00:00:00.000 Started by upstream project "autotest-nightly" build number 4310 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3673 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.085 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.120 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.210 > git --version # 'git version 2.39.2' 00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.086 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.097 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.108 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.108 > git config core.sparsecheckout # timeout=10 00:00:08.119 > git read-tree -mu HEAD # timeout=10 00:00:08.135 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.158 Commit message: 
"jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.158 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.253 [Pipeline] Start of Pipeline 00:00:08.270 [Pipeline] library 00:00:08.272 Loading library shm_lib@master 00:00:08.272 Library shm_lib@master is cached. Copying from home. 00:00:08.288 [Pipeline] node 00:00:08.301 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:08.303 [Pipeline] { 00:00:08.313 [Pipeline] catchError 00:00:08.315 [Pipeline] { 00:00:08.330 [Pipeline] wrap 00:00:08.339 [Pipeline] { 00:00:08.348 [Pipeline] stage 00:00:08.350 [Pipeline] { (Prologue) 00:00:08.365 [Pipeline] echo 00:00:08.366 Node: VM-host-WFP7 00:00:08.371 [Pipeline] cleanWs 00:00:08.379 [WS-CLEANUP] Deleting project workspace... 00:00:08.379 [WS-CLEANUP] Deferred wipeout is used... 00:00:08.386 [WS-CLEANUP] done 00:00:08.573 [Pipeline] setCustomBuildProperty 00:00:08.642 [Pipeline] httpRequest 00:00:09.235 [Pipeline] echo 00:00:09.236 Sorcerer 10.211.164.101 is alive 00:00:09.246 [Pipeline] retry 00:00:09.248 [Pipeline] { 00:00:09.263 [Pipeline] httpRequest 00:00:09.268 HttpMethod: GET 00:00:09.269 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.269 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.280 Response Code: HTTP/1.1 200 OK 00:00:09.281 Success: Status code 200 is in the accepted range: 200,404 00:00:09.282 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.473 [Pipeline] } 00:00:14.490 [Pipeline] // retry 00:00:14.499 [Pipeline] sh 00:00:14.786 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.804 [Pipeline] httpRequest 00:00:15.925 [Pipeline] echo 00:00:15.927 Sorcerer 10.211.164.101 is alive 00:00:15.937 [Pipeline] retry 00:00:15.939 [Pipeline] { 00:00:15.955 
[Pipeline] httpRequest 00:00:15.960 HttpMethod: GET 00:00:15.961 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:15.961 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:15.973 Response Code: HTTP/1.1 200 OK 00:00:15.974 Success: Status code 200 is in the accepted range: 200,404 00:00:15.974 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:21.791 [Pipeline] } 00:01:21.809 [Pipeline] // retry 00:01:21.817 [Pipeline] sh 00:01:22.100 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:01:24.653 [Pipeline] sh 00:01:24.936 + git -C spdk log --oneline -n5 00:01:24.936 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:24.936 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:24.936 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:24.936 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:01:24.936 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:24.958 [Pipeline] writeFile 00:01:24.977 [Pipeline] sh 00:01:25.259 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.271 [Pipeline] sh 00:01:25.554 + cat autorun-spdk.conf 00:01:25.554 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.554 SPDK_RUN_ASAN=1 00:01:25.554 SPDK_RUN_UBSAN=1 00:01:25.554 SPDK_TEST_RAID=1 00:01:25.554 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.562 RUN_NIGHTLY=1 00:01:25.564 [Pipeline] } 00:01:25.577 [Pipeline] // stage 00:01:25.592 [Pipeline] stage 00:01:25.594 [Pipeline] { (Run VM) 00:01:25.607 [Pipeline] sh 00:01:25.888 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:25.888 + echo 'Start stage prepare_nvme.sh' 00:01:25.888 Start stage prepare_nvme.sh 00:01:25.888 + [[ -n 3 ]] 00:01:25.888 + 
disk_prefix=ex3 00:01:25.888 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:25.888 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:25.888 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:25.888 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.888 ++ SPDK_RUN_ASAN=1 00:01:25.888 ++ SPDK_RUN_UBSAN=1 00:01:25.888 ++ SPDK_TEST_RAID=1 00:01:25.888 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.888 ++ RUN_NIGHTLY=1 00:01:25.888 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:25.888 + nvme_files=() 00:01:25.888 + declare -A nvme_files 00:01:25.888 + backend_dir=/var/lib/libvirt/images/backends 00:01:25.888 + nvme_files['nvme.img']=5G 00:01:25.888 + nvme_files['nvme-cmb.img']=5G 00:01:25.888 + nvme_files['nvme-multi0.img']=4G 00:01:25.888 + nvme_files['nvme-multi1.img']=4G 00:01:25.888 + nvme_files['nvme-multi2.img']=4G 00:01:25.888 + nvme_files['nvme-openstack.img']=8G 00:01:25.888 + nvme_files['nvme-zns.img']=5G 00:01:25.888 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:25.888 + (( SPDK_TEST_FTL == 1 )) 00:01:25.888 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:25.888 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:25.888 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:25.888 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:25.888 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:25.888 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:25.888 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:25.888 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.888 + for nvme in "${!nvme_files[@]}" 00:01:25.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:26.147 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.147 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:26.147 + echo 'End stage prepare_nvme.sh' 00:01:26.147 End stage prepare_nvme.sh 00:01:26.159 [Pipeline] sh 00:01:26.441 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:26.441 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:26.441 00:01:26.441 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:26.441 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:26.441 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:26.441 HELP=0 00:01:26.441 DRY_RUN=0 00:01:26.441 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:26.441 NVME_DISKS_TYPE=nvme,nvme, 00:01:26.441 NVME_AUTO_CREATE=0 00:01:26.441 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:26.441 NVME_CMB=,, 00:01:26.441 NVME_PMR=,, 00:01:26.441 NVME_ZNS=,, 00:01:26.441 NVME_MS=,, 00:01:26.441 NVME_FDP=,, 00:01:26.441 SPDK_VAGRANT_DISTRO=fedora39 00:01:26.441 SPDK_VAGRANT_VMCPU=10 00:01:26.441 SPDK_VAGRANT_VMRAM=12288 00:01:26.441 SPDK_VAGRANT_PROVIDER=libvirt 00:01:26.441 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:26.441 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:26.441 SPDK_OPENSTACK_NETWORK=0 00:01:26.441 VAGRANT_PACKAGE_BOX=0 00:01:26.441 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:26.441 
FORCE_DISTRO=true 00:01:26.441 VAGRANT_BOX_VERSION= 00:01:26.441 EXTRA_VAGRANTFILES= 00:01:26.441 NIC_MODEL=virtio 00:01:26.441 00:01:26.441 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:26.441 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:28.367 Bringing machine 'default' up with 'libvirt' provider... 00:01:28.936 ==> default: Creating image (snapshot of base box volume). 00:01:28.936 ==> default: Creating domain with the following settings... 00:01:28.936 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732733978_be621e641cd229b3e59e 00:01:28.936 ==> default: -- Domain type: kvm 00:01:28.936 ==> default: -- Cpus: 10 00:01:28.936 ==> default: -- Feature: acpi 00:01:28.936 ==> default: -- Feature: apic 00:01:28.936 ==> default: -- Feature: pae 00:01:28.936 ==> default: -- Memory: 12288M 00:01:28.936 ==> default: -- Memory Backing: hugepages: 00:01:28.936 ==> default: -- Management MAC: 00:01:28.936 ==> default: -- Loader: 00:01:28.936 ==> default: -- Nvram: 00:01:28.936 ==> default: -- Base box: spdk/fedora39 00:01:28.936 ==> default: -- Storage pool: default 00:01:28.936 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732733978_be621e641cd229b3e59e.img (20G) 00:01:28.936 ==> default: -- Volume Cache: default 00:01:28.936 ==> default: -- Kernel: 00:01:28.936 ==> default: -- Initrd: 00:01:28.936 ==> default: -- Graphics Type: vnc 00:01:28.936 ==> default: -- Graphics Port: -1 00:01:28.936 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.936 ==> default: -- Graphics Password: Not defined 00:01:28.936 ==> default: -- Video Type: cirrus 00:01:28.936 ==> default: -- Video VRAM: 9216 00:01:28.936 ==> default: -- Sound Type: 00:01:28.936 ==> default: -- Keymap: en-us 00:01:28.936 ==> default: -- TPM Path: 00:01:28.936 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.936 ==> default: -- Command line args: 00:01:28.936 
==> default: -> value=-device, 00:01:28.936 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:28.936 ==> default: -> value=-drive, 00:01:28.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:28.936 ==> default: -> value=-device, 00:01:28.936 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.936 ==> default: -> value=-device, 00:01:28.936 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:28.936 ==> default: -> value=-drive, 00:01:28.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:28.936 ==> default: -> value=-device, 00:01:28.936 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.936 ==> default: -> value=-drive, 00:01:28.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:28.936 ==> default: -> value=-device, 00:01:28.936 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.936 ==> default: -> value=-drive, 00:01:28.936 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:28.936 ==> default: -> value=-device, 00:01:28.936 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.936 ==> default: Creating shared folders metadata... 00:01:28.936 ==> default: Starting domain. 00:01:30.315 ==> default: Waiting for domain to get an IP address... 00:01:48.419 ==> default: Waiting for SSH to become available... 00:01:48.419 ==> default: Configuring and enabling network interfaces... 
00:01:53.769 default: SSH address: 192.168.121.106:22 00:01:53.769 default: SSH username: vagrant 00:01:53.769 default: SSH auth method: private key 00:01:56.310 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:04.444 ==> default: Mounting SSHFS shared folder... 00:02:06.988 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:06.988 ==> default: Checking Mount.. 00:02:08.370 ==> default: Folder Successfully Mounted! 00:02:08.370 ==> default: Running provisioner: file... 00:02:09.752 default: ~/.gitconfig => .gitconfig 00:02:10.013 00:02:10.013 SUCCESS! 00:02:10.013 00:02:10.013 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:10.013 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:10.013 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:10.013 00:02:10.023 [Pipeline] } 00:02:10.035 [Pipeline] // stage 00:02:10.043 [Pipeline] dir 00:02:10.043 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:10.045 [Pipeline] { 00:02:10.054 [Pipeline] catchError 00:02:10.055 [Pipeline] { 00:02:10.065 [Pipeline] sh 00:02:10.347 + vagrant ssh-config --host vagrant 00:02:10.347 + sed -ne /^Host/,$p 00:02:10.347 + tee ssh_conf 00:02:12.883 Host vagrant 00:02:12.883 HostName 192.168.121.106 00:02:12.883 User vagrant 00:02:12.883 Port 22 00:02:12.883 UserKnownHostsFile /dev/null 00:02:12.883 StrictHostKeyChecking no 00:02:12.883 PasswordAuthentication no 00:02:12.883 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:12.883 IdentitiesOnly yes 00:02:12.883 LogLevel FATAL 00:02:12.883 ForwardAgent yes 00:02:12.883 ForwardX11 yes 00:02:12.883 00:02:12.898 [Pipeline] withEnv 00:02:12.901 [Pipeline] { 00:02:12.914 [Pipeline] sh 00:02:13.201 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:13.201 source /etc/os-release 00:02:13.201 [[ -e /image.version ]] && img=$(< /image.version) 00:02:13.201 # Minimal, systemd-like check. 00:02:13.201 if [[ -e /.dockerenv ]]; then 00:02:13.201 # Clear garbage from the node's name: 00:02:13.201 # agt-er_autotest_547-896 -> autotest_547-896 00:02:13.201 # $HOSTNAME is the actual container id 00:02:13.201 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:13.201 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:13.201 # We can assume this is a mount from a host where container is running, 00:02:13.201 # so fetch its hostname to easily identify the target swarm worker. 
00:02:13.201 container="$(< /etc/hostname) ($agent)" 00:02:13.201 else 00:02:13.201 # Fallback 00:02:13.201 container=$agent 00:02:13.201 fi 00:02:13.201 fi 00:02:13.201 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:13.201 00:02:13.474 [Pipeline] } 00:02:13.491 [Pipeline] // withEnv 00:02:13.499 [Pipeline] setCustomBuildProperty 00:02:13.514 [Pipeline] stage 00:02:13.516 [Pipeline] { (Tests) 00:02:13.532 [Pipeline] sh 00:02:13.817 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:14.092 [Pipeline] sh 00:02:14.376 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:14.652 [Pipeline] timeout 00:02:14.652 Timeout set to expire in 1 hr 30 min 00:02:14.655 [Pipeline] { 00:02:14.670 [Pipeline] sh 00:02:14.955 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:15.526 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:15.539 [Pipeline] sh 00:02:15.824 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:16.100 [Pipeline] sh 00:02:16.385 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:16.663 [Pipeline] sh 00:02:16.950 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:17.210 ++ readlink -f spdk_repo 00:02:17.210 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:17.210 + [[ -n /home/vagrant/spdk_repo ]] 00:02:17.210 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:17.210 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:17.210 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:17.210 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:17.210 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:17.210 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:17.210 + cd /home/vagrant/spdk_repo 00:02:17.210 + source /etc/os-release 00:02:17.210 ++ NAME='Fedora Linux' 00:02:17.210 ++ VERSION='39 (Cloud Edition)' 00:02:17.210 ++ ID=fedora 00:02:17.210 ++ VERSION_ID=39 00:02:17.210 ++ VERSION_CODENAME= 00:02:17.210 ++ PLATFORM_ID=platform:f39 00:02:17.210 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:17.210 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:17.210 ++ LOGO=fedora-logo-icon 00:02:17.210 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:17.210 ++ HOME_URL=https://fedoraproject.org/ 00:02:17.210 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:17.210 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:17.210 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:17.210 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:17.210 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:17.210 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:17.210 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:17.210 ++ SUPPORT_END=2024-11-12 00:02:17.210 ++ VARIANT='Cloud Edition' 00:02:17.210 ++ VARIANT_ID=cloud 00:02:17.210 + uname -a 00:02:17.210 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:17.210 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:17.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:17.781 Hugepages 00:02:17.781 node hugesize free / total 00:02:17.781 node0 1048576kB 0 / 0 00:02:17.781 node0 2048kB 0 / 0 00:02:17.781 00:02:17.782 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:17.782 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:17.782 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:17.782 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:17.782 + rm -f /tmp/spdk-ld-path 00:02:17.782 + source autorun-spdk.conf 00:02:17.782 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:17.782 ++ SPDK_RUN_ASAN=1 00:02:17.782 ++ SPDK_RUN_UBSAN=1 00:02:17.782 ++ SPDK_TEST_RAID=1 00:02:17.782 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:17.782 ++ RUN_NIGHTLY=1 00:02:17.782 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:17.782 + [[ -n '' ]] 00:02:17.782 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:18.041 + for M in /var/spdk/build-*-manifest.txt 00:02:18.041 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:18.042 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:18.042 + for M in /var/spdk/build-*-manifest.txt 00:02:18.042 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:18.042 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:18.042 + for M in /var/spdk/build-*-manifest.txt 00:02:18.042 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:18.042 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:18.042 ++ uname 00:02:18.042 + [[ Linux == \L\i\n\u\x ]] 00:02:18.042 + sudo dmesg -T 00:02:18.042 + sudo dmesg --clear 00:02:18.042 + dmesg_pid=5423 00:02:18.042 + [[ Fedora Linux == FreeBSD ]] 00:02:18.042 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:18.042 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:18.042 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:18.042 + sudo dmesg -Tw 00:02:18.042 + [[ -x /usr/src/fio-static/fio ]] 00:02:18.042 + export FIO_BIN=/usr/src/fio-static/fio 00:02:18.042 + FIO_BIN=/usr/src/fio-static/fio 00:02:18.042 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:18.042 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:18.042 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:18.042 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:18.042 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:18.042 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:18.042 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:18.042 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:18.042 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:18.042 19:00:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:18.042 19:00:27 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:18.042 19:00:27 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.042 19:00:27 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:18.042 19:00:27 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:18.042 19:00:27 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:18.042 19:00:27 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:18.042 19:00:27 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:02:18.042 19:00:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:18.042 19:00:27 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:18.302 19:00:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:18.302 19:00:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:18.302 19:00:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:18.302 19:00:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:18.302 19:00:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.302 19:00:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.302 19:00:27 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.302 19:00:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.302 19:00:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.302 19:00:27 -- paths/export.sh@5 -- $ export PATH 00:02:18.302 19:00:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.302 19:00:27 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:18.302 19:00:27 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:18.302 19:00:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732734027.XXXXXX 00:02:18.302 19:00:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732734027.dHf5ID 00:02:18.302 19:00:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:18.302 19:00:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:18.302 19:00:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:18.302 19:00:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:18.302 19:00:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:18.302 19:00:27 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:18.302 19:00:27 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:18.302 19:00:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.302 19:00:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:18.302 19:00:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:18.302 19:00:27 -- pm/common@17 -- $ local monitor 00:02:18.302 19:00:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.302 19:00:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.302 19:00:27 -- pm/common@25 -- $ sleep 1 00:02:18.302 19:00:27 -- pm/common@21 -- $ date +%s 00:02:18.302 19:00:27 -- pm/common@21 -- $ date +%s 00:02:18.302 
19:00:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732734027
00:02:18.302 19:00:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732734027
00:02:18.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732734027_collect-vmstat.pm.log
00:02:18.302 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732734027_collect-cpu-load.pm.log
00:02:19.242 19:00:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:19.242 19:00:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:19.242 19:00:28 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:19.242 19:00:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:19.242 19:00:28 -- spdk/autobuild.sh@16 -- $ date -u
00:02:19.242 Wed Nov 27 07:00:28 PM UTC 2024
00:02:19.242 19:00:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:19.242 v25.01-pre-276-g35cd3e84d
00:02:19.242 19:00:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:19.242 19:00:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:19.242 19:00:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.242 19:00:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.242 19:00:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.242 ************************************
00:02:19.242 START TEST asan
00:02:19.242 ************************************
00:02:19.242 using asan
00:02:19.242 19:00:28 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:19.242
00:02:19.242 real 0m0.000s
00:02:19.242 user 0m0.000s
00:02:19.242 sys 0m0.000s
00:02:19.242 19:00:28 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:19.242 19:00:28 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:19.242 ************************************
00:02:19.242 END TEST asan
00:02:19.242 ************************************
00:02:19.503 19:00:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:19.503 19:00:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:19.503 19:00:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.503 19:00:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.503 19:00:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.503 ************************************
00:02:19.503 START TEST ubsan
00:02:19.503 ************************************
00:02:19.503 using ubsan
00:02:19.503 19:00:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:19.503
00:02:19.503 real 0m0.000s
00:02:19.503 user 0m0.000s
00:02:19.503 sys 0m0.000s
00:02:19.503 19:00:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:19.503 19:00:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:19.503 ************************************
00:02:19.503 END TEST ubsan
00:02:19.503 ************************************
00:02:19.503 19:00:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:19.503 19:00:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:19.503 19:00:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:19.503 19:00:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:19.503 19:00:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:19.503 19:00:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:19.503 19:00:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:19.503 19:00:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:19.503 19:00:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:19.503 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:19.503 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:20.078 Using 'verbs' RDMA provider
00:02:36.366 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:51.260 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:52.097 Creating mk/config.mk...done.
00:02:52.097 Creating mk/cc.flags.mk...done.
00:02:52.097 Type 'make' to build.
00:02:52.097 19:01:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:52.097 19:01:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:52.097 19:01:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:52.097 19:01:01 -- common/autotest_common.sh@10 -- $ set +x
00:02:52.097 ************************************
00:02:52.097 START TEST make
00:02:52.097 ************************************
00:02:52.097 19:01:01 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:52.665 make[1]: Nothing to be done for 'all'.
00:03:04.875 The Meson build system
00:03:04.875 Version: 1.5.0
00:03:04.875 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:04.875 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:04.875 Build type: native build
00:03:04.875 Program cat found: YES (/usr/bin/cat)
00:03:04.875 Project name: DPDK
00:03:04.875 Project version: 24.03.0
00:03:04.875 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:04.875 C linker for the host machine: cc ld.bfd 2.40-14
00:03:04.875 Host machine cpu family: x86_64
00:03:04.875 Host machine cpu: x86_64
00:03:04.875 Message: ## Building in Developer Mode ##
00:03:04.875 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:04.875 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:04.875 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:04.875 Program python3 found: YES (/usr/bin/python3)
00:03:04.875 Program cat found: YES (/usr/bin/cat)
00:03:04.875 Compiler for C supports arguments -march=native: YES
00:03:04.875 Checking for size of "void *" : 8
00:03:04.875 Checking for size of "void *" : 8 (cached)
00:03:04.875 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:04.875 Library m found: YES
00:03:04.875 Library numa found: YES
00:03:04.875 Has header "numaif.h" : YES
00:03:04.875 Library fdt found: NO
00:03:04.875 Library execinfo found: NO
00:03:04.875 Has header "execinfo.h" : YES
00:03:04.875 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:04.875 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:04.875 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:04.875 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:04.875 Run-time dependency openssl found: YES 3.1.1
00:03:04.875 Run-time dependency libpcap found: YES 1.10.4
00:03:04.875 Has header "pcap.h" with dependency libpcap: YES
00:03:04.875 Compiler for C supports arguments -Wcast-qual: YES
00:03:04.875 Compiler for C supports arguments -Wdeprecated: YES
00:03:04.875 Compiler for C supports arguments -Wformat: YES
00:03:04.875 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:04.875 Compiler for C supports arguments -Wformat-security: NO
00:03:04.875 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:04.875 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:04.875 Compiler for C supports arguments -Wnested-externs: YES
00:03:04.875 Compiler for C supports arguments -Wold-style-definition: YES
00:03:04.875 Compiler for C supports arguments -Wpointer-arith: YES
00:03:04.875 Compiler for C supports arguments -Wsign-compare: YES
00:03:04.875 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:04.875 Compiler for C supports arguments -Wundef: YES
00:03:04.875 Compiler for C supports arguments -Wwrite-strings: YES
00:03:04.875 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:04.875 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:04.875 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:04.875 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:04.875 Program objdump found: YES (/usr/bin/objdump)
00:03:04.875 Compiler for C supports arguments -mavx512f: YES
00:03:04.875 Checking if "AVX512 checking" compiles: YES
00:03:04.875 Fetching value of define "__SSE4_2__" : 1
00:03:04.875 Fetching value of define "__AES__" : 1
00:03:04.875 Fetching value of define "__AVX__" : 1
00:03:04.875 Fetching value of define "__AVX2__" : 1
00:03:04.875 Fetching value of define "__AVX512BW__" : 1
00:03:04.875 Fetching value of define "__AVX512CD__" : 1
00:03:04.875 Fetching value of define "__AVX512DQ__" : 1
00:03:04.875 Fetching value of define "__AVX512F__" : 1
00:03:04.875 Fetching value of define "__AVX512VL__" : 1
00:03:04.875 Fetching value of define "__PCLMUL__" : 1
00:03:04.875 Fetching value of define "__RDRND__" : 1
00:03:04.875 Fetching value of define "__RDSEED__" : 1
00:03:04.875 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:04.875 Fetching value of define "__znver1__" : (undefined)
00:03:04.875 Fetching value of define "__znver2__" : (undefined)
00:03:04.875 Fetching value of define "__znver3__" : (undefined)
00:03:04.875 Fetching value of define "__znver4__" : (undefined)
00:03:04.875 Library asan found: YES
00:03:04.875 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:04.875 Message: lib/log: Defining dependency "log"
00:03:04.875 Message: lib/kvargs: Defining dependency "kvargs"
00:03:04.875 Message: lib/telemetry: Defining dependency "telemetry"
00:03:04.875 Library rt found: YES
00:03:04.875 Checking for function "getentropy" : NO
00:03:04.875 Message: lib/eal: Defining dependency "eal"
00:03:04.875 Message: lib/ring: Defining dependency "ring"
00:03:04.875 Message: lib/rcu: Defining dependency "rcu"
00:03:04.875 Message: lib/mempool: Defining dependency "mempool"
00:03:04.875 Message: lib/mbuf: Defining dependency "mbuf"
00:03:04.875 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:04.875 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:04.875 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:04.875 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:04.875 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:04.875 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:04.875 Compiler for C supports arguments -mpclmul: YES
00:03:04.875 Compiler for C supports arguments -maes: YES
00:03:04.875 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:04.875 Compiler for C supports arguments -mavx512bw: YES
00:03:04.875 Compiler for C supports arguments -mavx512dq: YES
00:03:04.875 Compiler for C supports arguments -mavx512vl: YES
00:03:04.875 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:04.875 Compiler for C supports arguments -mavx2: YES
00:03:04.875 Compiler for C supports arguments -mavx: YES
00:03:04.875 Message: lib/net: Defining dependency "net"
00:03:04.875 Message: lib/meter: Defining dependency "meter"
00:03:04.875 Message: lib/ethdev: Defining dependency "ethdev"
00:03:04.875 Message: lib/pci: Defining dependency "pci"
00:03:04.875 Message: lib/cmdline: Defining dependency "cmdline"
00:03:04.875 Message: lib/hash: Defining dependency "hash"
00:03:04.875 Message: lib/timer: Defining dependency "timer"
00:03:04.875 Message: lib/compressdev: Defining dependency "compressdev"
00:03:04.875 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:04.875 Message: lib/dmadev: Defining dependency "dmadev"
00:03:04.875 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:04.875 Message: lib/power: Defining dependency "power"
00:03:04.875 Message: lib/reorder: Defining dependency "reorder"
00:03:04.875 Message: lib/security: Defining dependency "security"
00:03:04.875 Has header "linux/userfaultfd.h" : YES
00:03:04.875 Has header "linux/vduse.h" : YES
00:03:04.875 Message: lib/vhost: Defining dependency "vhost"
00:03:04.876 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:04.876 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:04.876 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:04.876 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:04.876 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:04.876 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:04.876 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:04.876 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:04.876 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:04.876 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:04.876 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:04.876 Configuring doxy-api-html.conf using configuration
00:03:04.876 Configuring doxy-api-man.conf using configuration
00:03:04.876 Program mandb found: YES (/usr/bin/mandb)
00:03:04.876 Program sphinx-build found: NO
00:03:04.876 Configuring rte_build_config.h using configuration
00:03:04.876 Message:
00:03:04.876 =================
00:03:04.876 Applications Enabled
00:03:04.876 =================
00:03:04.876
00:03:04.876 apps:
00:03:04.876
00:03:04.876
00:03:04.876 Message:
00:03:04.876 =================
00:03:04.876 Libraries Enabled
00:03:04.876 =================
00:03:04.876
00:03:04.876 libs:
00:03:04.876 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:04.876 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:04.876 cryptodev, dmadev, power, reorder, security, vhost,
00:03:04.876
00:03:04.876 Message:
00:03:04.876 ===============
00:03:04.876 Drivers Enabled
00:03:04.876 ===============
00:03:04.876
00:03:04.876 common:
00:03:04.876
00:03:04.876 bus:
00:03:04.876 pci, vdev,
00:03:04.876 mempool:
00:03:04.876 ring,
00:03:04.876 dma:
00:03:04.876
00:03:04.876 net:
00:03:04.876
00:03:04.876 crypto:
00:03:04.876
00:03:04.876 compress:
00:03:04.876
00:03:04.876 vdpa:
00:03:04.876
00:03:04.876
00:03:04.876 Message:
00:03:04.876 =================
00:03:04.876 Content Skipped
00:03:04.876 =================
00:03:04.876
00:03:04.876 apps:
00:03:04.876 dumpcap: explicitly disabled via build config
00:03:04.876 graph: explicitly disabled via build config
00:03:04.876 pdump: explicitly disabled via build config
00:03:04.876 proc-info: explicitly disabled via build config
00:03:04.876 test-acl: explicitly disabled via build config
00:03:04.876 test-bbdev: explicitly disabled via build config
00:03:04.876 test-cmdline: explicitly disabled via build config
00:03:04.876 test-compress-perf: explicitly disabled via build config
00:03:04.876 test-crypto-perf: explicitly disabled via build config
00:03:04.876 test-dma-perf: explicitly disabled via build config
00:03:04.876 test-eventdev: explicitly disabled via build config
00:03:04.876 test-fib: explicitly disabled via build config
00:03:04.876 test-flow-perf: explicitly disabled via build config
00:03:04.876 test-gpudev: explicitly disabled via build config
00:03:04.876 test-mldev: explicitly disabled via build config
00:03:04.876 test-pipeline: explicitly disabled via build config
00:03:04.876 test-pmd: explicitly disabled via build config
00:03:04.876 test-regex: explicitly disabled via build config
00:03:04.876 test-sad: explicitly disabled via build config
00:03:04.876 test-security-perf: explicitly disabled via build config
00:03:04.876
00:03:04.876 libs:
00:03:04.876 argparse: explicitly disabled via build config
00:03:04.876 metrics: explicitly disabled via build config
00:03:04.876 acl: explicitly disabled via build config
00:03:04.876 bbdev: explicitly disabled via build config
00:03:04.876 bitratestats: explicitly disabled via build config
00:03:04.876 bpf: explicitly disabled via build config
00:03:04.876 cfgfile: explicitly disabled via build config
00:03:04.876 distributor: explicitly disabled via build config
00:03:04.876 efd: explicitly disabled via build config
00:03:04.876 eventdev: explicitly disabled via build config
00:03:04.876 dispatcher: explicitly disabled via build config
00:03:04.876 gpudev: explicitly disabled via build config
00:03:04.876 gro: explicitly disabled via build config
00:03:04.876 gso: explicitly disabled via build config
00:03:04.876 ip_frag: explicitly disabled via build config
00:03:04.876 jobstats: explicitly disabled via build config
00:03:04.876 latencystats: explicitly disabled via build config
00:03:04.876 lpm: explicitly disabled via build config
00:03:04.876 member: explicitly disabled via build config
00:03:04.876 pcapng: explicitly disabled via build config
00:03:04.876 rawdev: explicitly disabled via build config
00:03:04.876 regexdev: explicitly disabled via build config
00:03:04.876 mldev: explicitly disabled via build config
00:03:04.876 rib: explicitly disabled via build config
00:03:04.876 sched: explicitly disabled via build config
00:03:04.876 stack: explicitly disabled via build config
00:03:04.876 ipsec: explicitly disabled via build config
00:03:04.876 pdcp: explicitly disabled via build config
00:03:04.876 fib: explicitly disabled via build config
00:03:04.876 port: explicitly disabled via build config
00:03:04.876 pdump: explicitly disabled via build config
00:03:04.876 table: explicitly disabled via build config
00:03:04.876 pipeline: explicitly disabled via build config
00:03:04.876 graph: explicitly disabled via build config
00:03:04.876 node: explicitly disabled via build config
00:03:04.876
00:03:04.876 drivers:
00:03:04.876 common/cpt: not in enabled drivers build config
00:03:04.876 common/dpaax: not in enabled drivers build config
00:03:04.876 common/iavf: not in enabled drivers build config
00:03:04.876 common/idpf: not in enabled drivers build config
00:03:04.876 common/ionic: not in enabled drivers build config
00:03:04.876 common/mvep: not in enabled drivers build config
00:03:04.876 common/octeontx: not in enabled drivers build config
00:03:04.876 bus/auxiliary: not in enabled drivers build config
00:03:04.876 bus/cdx: not in enabled drivers build config
00:03:04.876 bus/dpaa: not in enabled drivers build config
00:03:04.876 bus/fslmc: not in enabled drivers build config
00:03:04.876 bus/ifpga: not in enabled drivers build config
00:03:04.876 bus/platform: not in enabled drivers build config
00:03:04.876 bus/uacce: not in enabled drivers build config
00:03:04.876 bus/vmbus: not in enabled drivers build config
00:03:04.876 common/cnxk: not in enabled drivers build config
00:03:04.876 common/mlx5: not in enabled drivers build config
00:03:04.876 common/nfp: not in enabled drivers build config
00:03:04.876 common/nitrox: not in enabled drivers build config
00:03:04.876 common/qat: not in enabled drivers build config
00:03:04.876 common/sfc_efx: not in enabled drivers build config
00:03:04.876 mempool/bucket: not in enabled drivers build config
00:03:04.876 mempool/cnxk: not in enabled drivers build config
00:03:04.876 mempool/dpaa: not in enabled drivers build config
00:03:04.876 mempool/dpaa2: not in enabled drivers build config
00:03:04.876 mempool/octeontx: not in enabled drivers build config
00:03:04.876 mempool/stack: not in enabled drivers build config
00:03:04.876 dma/cnxk: not in enabled drivers build config
00:03:04.876 dma/dpaa: not in enabled drivers build config
00:03:04.876 dma/dpaa2: not in enabled drivers build config
00:03:04.876 dma/hisilicon: not in enabled drivers build config
00:03:04.876 dma/idxd: not in enabled drivers build config
00:03:04.876 dma/ioat: not in enabled drivers build config
00:03:04.876 dma/skeleton: not in enabled drivers build config
00:03:04.876 net/af_packet: not in enabled drivers build config
00:03:04.876 net/af_xdp: not in enabled drivers build config
00:03:04.876 net/ark: not in enabled drivers build config
00:03:04.876 net/atlantic: not in enabled drivers build config
00:03:04.876 net/avp: not in enabled drivers build config
00:03:04.876 net/axgbe: not in enabled drivers build config
00:03:04.876 net/bnx2x: not in enabled drivers build config
00:03:04.876 net/bnxt: not in enabled drivers build config
00:03:04.876 net/bonding: not in enabled drivers build config
00:03:04.876 net/cnxk: not in enabled drivers build config
00:03:04.876 net/cpfl: not in enabled drivers build config
00:03:04.876 net/cxgbe: not in enabled drivers build config
00:03:04.876 net/dpaa: not in enabled drivers build config
00:03:04.876 net/dpaa2: not in enabled drivers build config
00:03:04.876 net/e1000: not in enabled drivers build config
00:03:04.876 net/ena: not in enabled drivers build config
00:03:04.876 net/enetc: not in enabled drivers build config
00:03:04.876 net/enetfec: not in enabled drivers build config
00:03:04.876 net/enic: not in enabled drivers build config
00:03:04.876 net/failsafe: not in enabled drivers build config
00:03:04.876 net/fm10k: not in enabled drivers build config
00:03:04.876 net/gve: not in enabled drivers build config
00:03:04.876 net/hinic: not in enabled drivers build config
00:03:04.876 net/hns3: not in enabled drivers build config
00:03:04.876 net/i40e: not in enabled drivers build config
00:03:04.876 net/iavf: not in enabled drivers build config
00:03:04.876 net/ice: not in enabled drivers build config
00:03:04.876 net/idpf: not in enabled drivers build config
00:03:04.876 net/igc: not in enabled drivers build config
00:03:04.876 net/ionic: not in enabled drivers build config
00:03:04.876 net/ipn3ke: not in enabled drivers build config
00:03:04.876 net/ixgbe: not in enabled drivers build config
00:03:04.876 net/mana: not in enabled drivers build config
00:03:04.876 net/memif: not in enabled drivers build config
00:03:04.876 net/mlx4: not in enabled drivers build config
00:03:04.876 net/mlx5: not in enabled drivers build config
00:03:04.876 net/mvneta: not in enabled drivers build config
00:03:04.876 net/mvpp2: not in enabled drivers build config
00:03:04.876 net/netvsc: not in enabled drivers build config
00:03:04.876 net/nfb: not in enabled drivers build config
00:03:04.876 net/nfp: not in enabled drivers build config
00:03:04.876 net/ngbe: not in enabled drivers build config
00:03:04.876 net/null: not in enabled drivers build config
00:03:04.876 net/octeontx: not in enabled drivers build config
00:03:04.876 net/octeon_ep: not in enabled drivers build config
00:03:04.876 net/pcap: not in enabled drivers build config
00:03:04.876 net/pfe: not in enabled drivers build config
00:03:04.876 net/qede: not in enabled drivers build config
00:03:04.876 net/ring: not in enabled drivers build config
00:03:04.876 net/sfc: not in enabled drivers build config
00:03:04.876 net/softnic: not in enabled drivers build config
00:03:04.877 net/tap: not in enabled drivers build config
00:03:04.877 net/thunderx: not in enabled drivers build config
00:03:04.877 net/txgbe: not in enabled drivers build config
00:03:04.877 net/vdev_netvsc: not in enabled drivers build config
00:03:04.877 net/vhost: not in enabled drivers build config
00:03:04.877 net/virtio: not in enabled drivers build config
00:03:04.877 net/vmxnet3: not in enabled drivers build config
00:03:04.877 raw/*: missing internal dependency, "rawdev"
00:03:04.877 crypto/armv8: not in enabled drivers build config
00:03:04.877 crypto/bcmfs: not in enabled drivers build config
00:03:04.877 crypto/caam_jr: not in enabled drivers build config
00:03:04.877 crypto/ccp: not in enabled drivers build config
00:03:04.877 crypto/cnxk: not in enabled drivers build config
00:03:04.877 crypto/dpaa_sec: not in enabled drivers build config
00:03:04.877 crypto/dpaa2_sec: not in enabled drivers build config
00:03:04.877 crypto/ipsec_mb: not in enabled drivers build config
00:03:04.877 crypto/mlx5: not in enabled drivers build config
00:03:04.877 crypto/mvsam: not in enabled drivers build config
00:03:04.877 crypto/nitrox: not in enabled drivers build config
00:03:04.877 crypto/null: not in enabled drivers build config
00:03:04.877 crypto/octeontx: not in enabled drivers build config
00:03:04.877 crypto/openssl: not in enabled drivers build config
00:03:04.877 crypto/scheduler: not in enabled drivers build config
00:03:04.877 crypto/uadk: not in enabled drivers build config
00:03:04.877 crypto/virtio: not in enabled drivers build config
00:03:04.877 compress/isal: not in enabled drivers build config
00:03:04.877 compress/mlx5: not in enabled drivers build config
00:03:04.877 compress/nitrox: not in enabled drivers build config
00:03:04.877 compress/octeontx: not in enabled drivers build config
00:03:04.877 compress/zlib: not in enabled drivers build config
00:03:04.877 regex/*: missing internal dependency, "regexdev"
00:03:04.877 ml/*: missing internal dependency, "mldev"
00:03:04.877 vdpa/ifc: not in enabled drivers build config
00:03:04.877 vdpa/mlx5: not in enabled drivers build config
00:03:04.877 vdpa/nfp: not in enabled drivers build config
00:03:04.877 vdpa/sfc: not in enabled drivers build config
00:03:04.877 event/*: missing internal dependency, "eventdev"
00:03:04.877 baseband/*: missing internal dependency, "bbdev"
00:03:04.877 gpu/*: missing internal dependency, "gpudev"
00:03:04.877
00:03:04.877
00:03:04.877 Build targets in project: 85
00:03:04.877
00:03:04.877 DPDK 24.03.0
00:03:04.877
00:03:04.877 User defined options
00:03:04.877 buildtype : debug
00:03:04.877 default_library : shared
00:03:04.877 libdir : lib
00:03:04.877 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:04.877 b_sanitize : address
00:03:04.877 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:04.877 c_link_args :
00:03:04.877 cpu_instruction_set: native
00:03:04.877 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:04.877 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:04.877 enable_docs : false
00:03:04.877 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:04.877 enable_kmods : false
00:03:04.877 max_lcores : 128
00:03:04.877 tests : false
00:03:04.877
00:03:04.877 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:04.877 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:04.877 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:04.877 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:04.877 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:04.877 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:04.877 [5/268] Linking static target lib/librte_log.a
00:03:04.877 [6/268] Linking static target lib/librte_kvargs.a
00:03:04.877 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:04.877 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:04.877 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:04.877 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:04.877 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:04.877 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.877 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:04.877 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:05.136 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:05.136 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:05.136 [17/268] Linking static target lib/librte_telemetry.a
00:03:05.136 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:05.395 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.395 [20/268] Linking target lib/librte_log.so.24.1
00:03:05.653 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:05.653 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:05.653 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:05.653 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:05.653 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:05.653 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:05.653 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:05.653 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:05.913 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:05.913 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:05.913 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:05.913 [32/268] Linking target lib/librte_kvargs.so.24.1
00:03:05.913 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.913 [34/268] Linking target lib/librte_telemetry.so.24.1
00:03:06.173 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:06.173 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:06.173 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:06.173 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:06.173 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:06.434 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:06.434 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:06.434 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:06.434 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:06.434 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:06.434 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:06.434 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:06.695 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:06.695 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:06.954 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:06.954 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:06.954 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:06.954 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:07.213 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:07.213 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:07.213 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:07.213 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:07.213 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:07.473 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:07.473 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:07.473 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:07.473 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:07.473 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:07.732 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:07.732 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:07.732 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:07.732 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:07.732 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:07.998 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:07.998 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:08.258 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:08.258 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:08.258 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:08.258 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:08.258 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:08.258 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:08.517 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:08.517 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:08.517 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:08.517 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:08.777 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:08.777 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:09.037 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:09.037 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:09.037 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:09.037 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:09.038 [86/268] Linking static target lib/librte_ring.a
00:03:09.038 [87/268] Linking static target lib/librte_eal.a
00:03:09.297 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:09.297 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:09.297 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:09.558 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:09.558 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:09.558 [93/268] Linking static target lib/librte_rcu.a
00:03:09.558 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:09.558 [95/268] Linking static target lib/librte_mempool.a
00:03:09.558 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:09.558 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.558 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:09.822 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:09.822 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:10.082 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:10.082 [102/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:10.082 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.082 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:10.342 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:10.342 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:10.342 [107/268] Linking static target lib/librte_net.a
00:03:10.342 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:10.600 [109/268] Linking static target lib/librte_meter.a
00:03:10.600 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:10.600 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:10.600 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:10.600 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:10.859 [114/268] Linking static target lib/librte_mbuf.a
00:03:10.859 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:10.859 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.859 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.859 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.118 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:11.118 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:11.376 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:11.376 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:11.635 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:11.635 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:11.635 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:11.635 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:11.635 [127/268] Linking static target lib/librte_pci.a
00:03:11.635 [128/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.894 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:11.894 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:11.894 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:11.894 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:11.894 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:11.894 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:12.153 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:12.153 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.153 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:12.153 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:12.153 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:12.153 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:12.153 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:12.153 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:12.153 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:12.153 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:12.411 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:12.411 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:12.411 [147/268] Linking static target lib/librte_cmdline.a
00:03:12.411 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:12.411 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:12.672 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:12.672 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:12.672 [152/268] Linking static target lib/librte_timer.a
00:03:12.939 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:12.939 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:13.198 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:13.198 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:13.198 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:13.198 [158/268] Linking static target lib/librte_compressdev.a
00:03:13.198 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:13.198 [160/268] Linking static target
lib/librte_hash.a 00:03:13.198 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.456 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:13.456 [163/268] Linking static target lib/librte_ethdev.a 00:03:13.456 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:13.456 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:13.715 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:13.715 [167/268] Linking static target lib/librte_dmadev.a 00:03:13.715 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:13.974 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.974 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:13.974 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:13.974 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.974 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.232 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.490 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.490 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:14.490 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.490 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:14.490 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:14.490 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:14.490 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.490 
[182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.491 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:14.749 [184/268] Linking static target lib/librte_cryptodev.a 00:03:15.008 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:15.268 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.268 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.268 [188/268] Linking static target lib/librte_security.a 00:03:15.268 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.268 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.268 [191/268] Linking static target lib/librte_reorder.a 00:03:15.268 [192/268] Linking static target lib/librte_power.a 00:03:15.268 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.835 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:15.835 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.094 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.353 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.353 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:16.353 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:16.353 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.353 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:16.613 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:16.873 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:16.873 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:03:16.873 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:16.873 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:17.133 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:17.133 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:17.133 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:17.133 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:17.133 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.393 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:17.393 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:17.393 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.393 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:17.393 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.393 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:17.393 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:17.393 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:17.393 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:17.393 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:17.653 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.653 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:17.653 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.653 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:17.653 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:17.913 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.854 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.283 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.283 [230/268] Linking target lib/librte_eal.so.24.1 00:03:20.283 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:20.283 [232/268] Linking target lib/librte_meter.so.24.1 00:03:20.283 [233/268] Linking target lib/librte_dmadev.so.24.1 00:03:20.283 [234/268] Linking target lib/librte_timer.so.24.1 00:03:20.283 [235/268] Linking target lib/librte_pci.so.24.1 00:03:20.283 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:20.283 [237/268] Linking target lib/librte_ring.so.24.1 00:03:20.283 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:20.283 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:20.283 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:20.542 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:20.542 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:20.542 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:20.542 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:20.542 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:20.542 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:20.542 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:20.803 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:20.803 [249/268] 
Linking target drivers/librte_mempool_ring.so.24.1 00:03:20.803 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:20.803 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:20.803 [252/268] Linking target lib/librte_net.so.24.1 00:03:20.803 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:20.803 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:21.069 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:21.069 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:21.069 [257/268] Linking target lib/librte_security.so.24.1 00:03:21.069 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:21.069 [259/268] Linking target lib/librte_hash.so.24.1 00:03:21.069 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:22.451 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.451 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:22.451 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:22.451 [264/268] Linking target lib/librte_power.so.24.1 00:03:23.020 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:23.020 [266/268] Linking static target lib/librte_vhost.a 00:03:25.564 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.564 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:25.564 INFO: autodetecting backend as ninja 00:03:25.564 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:47.530 CC lib/ut/ut.o 00:03:47.530 CC lib/log/log.o 00:03:47.530 CC lib/log/log_deprecated.o 00:03:47.530 CC lib/log/log_flags.o 00:03:47.530 CC lib/ut_mock/mock.o 00:03:47.530 LIB libspdk_ut.a 00:03:47.530 LIB libspdk_ut_mock.a 
00:03:47.530 LIB libspdk_log.a 00:03:47.530 SO libspdk_ut.so.2.0 00:03:47.530 SO libspdk_ut_mock.so.6.0 00:03:47.530 SO libspdk_log.so.7.1 00:03:47.530 SYMLINK libspdk_ut_mock.so 00:03:47.530 SYMLINK libspdk_ut.so 00:03:47.530 SYMLINK libspdk_log.so 00:03:47.530 CXX lib/trace_parser/trace.o 00:03:47.530 CC lib/dma/dma.o 00:03:47.530 CC lib/util/base64.o 00:03:47.530 CC lib/util/bit_array.o 00:03:47.530 CC lib/util/cpuset.o 00:03:47.530 CC lib/ioat/ioat.o 00:03:47.530 CC lib/util/crc32.o 00:03:47.530 CC lib/util/crc16.o 00:03:47.530 CC lib/util/crc32c.o 00:03:47.530 CC lib/vfio_user/host/vfio_user_pci.o 00:03:47.530 CC lib/vfio_user/host/vfio_user.o 00:03:47.530 CC lib/util/crc32_ieee.o 00:03:47.530 CC lib/util/crc64.o 00:03:47.530 CC lib/util/dif.o 00:03:47.530 LIB libspdk_dma.a 00:03:47.530 CC lib/util/fd.o 00:03:47.530 CC lib/util/fd_group.o 00:03:47.530 SO libspdk_dma.so.5.0 00:03:47.530 CC lib/util/file.o 00:03:47.530 SYMLINK libspdk_dma.so 00:03:47.530 CC lib/util/hexlify.o 00:03:47.530 LIB libspdk_ioat.a 00:03:47.530 CC lib/util/iov.o 00:03:47.530 SO libspdk_ioat.so.7.0 00:03:47.530 CC lib/util/math.o 00:03:47.530 LIB libspdk_vfio_user.a 00:03:47.530 CC lib/util/net.o 00:03:47.530 SO libspdk_vfio_user.so.5.0 00:03:47.530 SYMLINK libspdk_ioat.so 00:03:47.530 CC lib/util/pipe.o 00:03:47.530 SYMLINK libspdk_vfio_user.so 00:03:47.530 CC lib/util/strerror_tls.o 00:03:47.530 CC lib/util/string.o 00:03:47.530 CC lib/util/uuid.o 00:03:47.530 CC lib/util/xor.o 00:03:47.530 CC lib/util/zipf.o 00:03:47.530 CC lib/util/md5.o 00:03:47.530 LIB libspdk_util.a 00:03:47.530 SO libspdk_util.so.10.1 00:03:47.530 LIB libspdk_trace_parser.a 00:03:47.530 SYMLINK libspdk_util.so 00:03:47.530 SO libspdk_trace_parser.so.6.0 00:03:47.530 SYMLINK libspdk_trace_parser.so 00:03:47.530 CC lib/vmd/vmd.o 00:03:47.530 CC lib/vmd/led.o 00:03:47.530 CC lib/idxd/idxd.o 00:03:47.530 CC lib/idxd/idxd_kernel.o 00:03:47.530 CC lib/idxd/idxd_user.o 00:03:47.530 CC lib/conf/conf.o 00:03:47.530 CC 
lib/env_dpdk/env.o 00:03:47.530 CC lib/env_dpdk/memory.o 00:03:47.530 CC lib/rdma_utils/rdma_utils.o 00:03:47.530 CC lib/json/json_parse.o 00:03:47.530 CC lib/json/json_util.o 00:03:47.530 CC lib/json/json_write.o 00:03:47.530 LIB libspdk_conf.a 00:03:47.530 CC lib/env_dpdk/pci.o 00:03:47.530 CC lib/env_dpdk/init.o 00:03:47.530 SO libspdk_conf.so.6.0 00:03:47.530 LIB libspdk_rdma_utils.a 00:03:47.530 SYMLINK libspdk_conf.so 00:03:47.530 SO libspdk_rdma_utils.so.1.0 00:03:47.530 CC lib/env_dpdk/threads.o 00:03:47.530 CC lib/env_dpdk/pci_ioat.o 00:03:47.530 SYMLINK libspdk_rdma_utils.so 00:03:47.530 CC lib/env_dpdk/pci_virtio.o 00:03:47.530 LIB libspdk_json.a 00:03:47.530 CC lib/env_dpdk/pci_vmd.o 00:03:47.530 CC lib/env_dpdk/pci_idxd.o 00:03:47.530 SO libspdk_json.so.6.0 00:03:47.530 CC lib/env_dpdk/pci_event.o 00:03:47.530 SYMLINK libspdk_json.so 00:03:47.530 CC lib/env_dpdk/sigbus_handler.o 00:03:47.530 CC lib/env_dpdk/pci_dpdk.o 00:03:47.530 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.530 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.530 LIB libspdk_idxd.a 00:03:47.530 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:47.530 CC lib/rdma_provider/common.o 00:03:47.530 SO libspdk_idxd.so.12.1 00:03:47.530 LIB libspdk_vmd.a 00:03:47.530 CC lib/jsonrpc/jsonrpc_server.o 00:03:47.530 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.530 SO libspdk_vmd.so.6.0 00:03:47.530 SYMLINK libspdk_idxd.so 00:03:47.530 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.530 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.530 SYMLINK libspdk_vmd.so 00:03:47.789 LIB libspdk_rdma_provider.a 00:03:47.789 SO libspdk_rdma_provider.so.7.0 00:03:47.789 LIB libspdk_jsonrpc.a 00:03:47.789 SYMLINK libspdk_rdma_provider.so 00:03:47.789 SO libspdk_jsonrpc.so.6.0 00:03:48.049 SYMLINK libspdk_jsonrpc.so 00:03:48.308 CC lib/rpc/rpc.o 00:03:48.567 LIB libspdk_env_dpdk.a 00:03:48.567 LIB libspdk_rpc.a 00:03:48.567 SO libspdk_rpc.so.6.0 00:03:48.826 SO libspdk_env_dpdk.so.15.1 00:03:48.826 SYMLINK libspdk_rpc.so 00:03:48.826 
SYMLINK libspdk_env_dpdk.so 00:03:49.086 CC lib/trace/trace.o 00:03:49.086 CC lib/trace/trace_rpc.o 00:03:49.086 CC lib/trace/trace_flags.o 00:03:49.086 CC lib/keyring/keyring.o 00:03:49.086 CC lib/keyring/keyring_rpc.o 00:03:49.086 CC lib/notify/notify.o 00:03:49.086 CC lib/notify/notify_rpc.o 00:03:49.345 LIB libspdk_notify.a 00:03:49.345 SO libspdk_notify.so.6.0 00:03:49.345 LIB libspdk_keyring.a 00:03:49.345 LIB libspdk_trace.a 00:03:49.345 SYMLINK libspdk_notify.so 00:03:49.345 SO libspdk_keyring.so.2.0 00:03:49.345 SO libspdk_trace.so.11.0 00:03:49.605 SYMLINK libspdk_keyring.so 00:03:49.605 SYMLINK libspdk_trace.so 00:03:49.864 CC lib/sock/sock_rpc.o 00:03:49.864 CC lib/sock/sock.o 00:03:49.864 CC lib/thread/thread.o 00:03:49.864 CC lib/thread/iobuf.o 00:03:50.431 LIB libspdk_sock.a 00:03:50.431 SO libspdk_sock.so.10.0 00:03:50.431 SYMLINK libspdk_sock.so 00:03:50.998 CC lib/nvme/nvme_ctrlr.o 00:03:50.998 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:50.998 CC lib/nvme/nvme_fabric.o 00:03:50.998 CC lib/nvme/nvme_pcie.o 00:03:50.998 CC lib/nvme/nvme_ns_cmd.o 00:03:50.998 CC lib/nvme/nvme_ns.o 00:03:50.998 CC lib/nvme/nvme_qpair.o 00:03:50.998 CC lib/nvme/nvme_pcie_common.o 00:03:50.998 CC lib/nvme/nvme.o 00:03:51.565 CC lib/nvme/nvme_quirks.o 00:03:51.824 CC lib/nvme/nvme_transport.o 00:03:51.824 CC lib/nvme/nvme_discovery.o 00:03:51.824 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:51.824 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:51.824 LIB libspdk_thread.a 00:03:51.824 CC lib/nvme/nvme_tcp.o 00:03:51.824 SO libspdk_thread.so.11.0 00:03:52.080 CC lib/nvme/nvme_opal.o 00:03:52.080 SYMLINK libspdk_thread.so 00:03:52.080 CC lib/nvme/nvme_io_msg.o 00:03:52.080 CC lib/nvme/nvme_poll_group.o 00:03:52.080 CC lib/nvme/nvme_zns.o 00:03:52.338 CC lib/nvme/nvme_stubs.o 00:03:52.597 CC lib/nvme/nvme_auth.o 00:03:52.597 CC lib/accel/accel.o 00:03:52.597 CC lib/accel/accel_rpc.o 00:03:52.597 CC lib/nvme/nvme_cuse.o 00:03:52.856 CC lib/nvme/nvme_rdma.o 00:03:52.856 CC lib/accel/accel_sw.o 
00:03:52.856 CC lib/blob/blobstore.o 00:03:53.115 CC lib/blob/request.o 00:03:53.115 CC lib/init/json_config.o 00:03:53.115 CC lib/blob/zeroes.o 00:03:53.374 CC lib/init/subsystem.o 00:03:53.374 CC lib/blob/blob_bs_dev.o 00:03:53.374 CC lib/init/subsystem_rpc.o 00:03:53.374 CC lib/init/rpc.o 00:03:53.633 LIB libspdk_init.a 00:03:53.633 CC lib/virtio/virtio.o 00:03:53.633 CC lib/virtio/virtio_vhost_user.o 00:03:53.633 SO libspdk_init.so.6.0 00:03:53.633 CC lib/fsdev/fsdev.o 00:03:53.633 CC lib/fsdev/fsdev_io.o 00:03:53.893 CC lib/fsdev/fsdev_rpc.o 00:03:53.893 SYMLINK libspdk_init.so 00:03:53.893 CC lib/virtio/virtio_vfio_user.o 00:03:53.893 LIB libspdk_accel.a 00:03:53.893 SO libspdk_accel.so.16.0 00:03:53.893 CC lib/virtio/virtio_pci.o 00:03:53.893 CC lib/event/app.o 00:03:53.893 SYMLINK libspdk_accel.so 00:03:53.893 CC lib/event/reactor.o 00:03:54.153 CC lib/event/log_rpc.o 00:03:54.153 CC lib/event/app_rpc.o 00:03:54.153 CC lib/event/scheduler_static.o 00:03:54.153 CC lib/bdev/bdev.o 00:03:54.153 CC lib/bdev/bdev_rpc.o 00:03:54.153 LIB libspdk_virtio.a 00:03:54.419 SO libspdk_virtio.so.7.0 00:03:54.419 CC lib/bdev/bdev_zone.o 00:03:54.419 SYMLINK libspdk_virtio.so 00:03:54.419 CC lib/bdev/part.o 00:03:54.419 LIB libspdk_nvme.a 00:03:54.419 CC lib/bdev/scsi_nvme.o 00:03:54.419 LIB libspdk_fsdev.a 00:03:54.419 LIB libspdk_event.a 00:03:54.419 SO libspdk_fsdev.so.2.0 00:03:54.692 SO libspdk_nvme.so.15.0 00:03:54.692 SO libspdk_event.so.14.0 00:03:54.692 SYMLINK libspdk_fsdev.so 00:03:54.692 SYMLINK libspdk_event.so 00:03:54.952 SYMLINK libspdk_nvme.so 00:03:54.952 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:55.521 LIB libspdk_fuse_dispatcher.a 00:03:55.521 SO libspdk_fuse_dispatcher.so.1.0 00:03:55.781 SYMLINK libspdk_fuse_dispatcher.so 00:03:57.161 LIB libspdk_blob.a 00:03:57.161 SO libspdk_blob.so.12.0 00:03:57.161 SYMLINK libspdk_blob.so 00:03:57.419 LIB libspdk_bdev.a 00:03:57.419 SO libspdk_bdev.so.17.0 00:03:57.419 CC lib/lvol/lvol.o 00:03:57.419 CC 
lib/blobfs/blobfs.o 00:03:57.419 CC lib/blobfs/tree.o 00:03:57.678 SYMLINK libspdk_bdev.so 00:03:57.678 CC lib/scsi/dev.o 00:03:57.678 CC lib/scsi/lun.o 00:03:57.678 CC lib/scsi/port.o 00:03:57.678 CC lib/nbd/nbd.o 00:03:57.678 CC lib/nbd/nbd_rpc.o 00:03:57.678 CC lib/ftl/ftl_core.o 00:03:57.678 CC lib/ublk/ublk.o 00:03:57.678 CC lib/nvmf/ctrlr.o 00:03:57.936 CC lib/nvmf/ctrlr_discovery.o 00:03:57.936 CC lib/nvmf/ctrlr_bdev.o 00:03:57.936 CC lib/nvmf/subsystem.o 00:03:58.194 CC lib/scsi/scsi.o 00:03:58.194 LIB libspdk_nbd.a 00:03:58.194 CC lib/scsi/scsi_bdev.o 00:03:58.194 SO libspdk_nbd.so.7.0 00:03:58.194 CC lib/ftl/ftl_init.o 00:03:58.194 SYMLINK libspdk_nbd.so 00:03:58.194 CC lib/ftl/ftl_layout.o 00:03:58.451 CC lib/nvmf/nvmf.o 00:03:58.451 CC lib/ublk/ublk_rpc.o 00:03:58.451 CC lib/nvmf/nvmf_rpc.o 00:03:58.452 LIB libspdk_blobfs.a 00:03:58.710 SO libspdk_blobfs.so.11.0 00:03:58.710 CC lib/ftl/ftl_debug.o 00:03:58.710 LIB libspdk_ublk.a 00:03:58.710 SO libspdk_ublk.so.3.0 00:03:58.710 SYMLINK libspdk_blobfs.so 00:03:58.710 CC lib/scsi/scsi_pr.o 00:03:58.710 LIB libspdk_lvol.a 00:03:58.710 CC lib/scsi/scsi_rpc.o 00:03:58.710 SYMLINK libspdk_ublk.so 00:03:58.710 CC lib/ftl/ftl_io.o 00:03:58.710 CC lib/ftl/ftl_sb.o 00:03:58.710 SO libspdk_lvol.so.11.0 00:03:58.710 SYMLINK libspdk_lvol.so 00:03:58.710 CC lib/ftl/ftl_l2p.o 00:03:58.968 CC lib/ftl/ftl_l2p_flat.o 00:03:58.968 CC lib/ftl/ftl_nv_cache.o 00:03:58.968 CC lib/ftl/ftl_band.o 00:03:58.968 CC lib/ftl/ftl_band_ops.o 00:03:58.968 CC lib/nvmf/transport.o 00:03:58.968 CC lib/ftl/ftl_writer.o 00:03:58.968 CC lib/scsi/task.o 00:03:59.227 CC lib/ftl/ftl_rq.o 00:03:59.227 LIB libspdk_scsi.a 00:03:59.486 CC lib/nvmf/tcp.o 00:03:59.486 CC lib/nvmf/stubs.o 00:03:59.486 SO libspdk_scsi.so.9.0 00:03:59.486 CC lib/ftl/ftl_reloc.o 00:03:59.486 CC lib/nvmf/mdns_server.o 00:03:59.486 SYMLINK libspdk_scsi.so 00:03:59.744 CC lib/ftl/ftl_l2p_cache.o 00:03:59.744 CC lib/vhost/vhost.o 00:03:59.744 CC lib/iscsi/conn.o 00:03:59.744 
CC lib/vhost/vhost_rpc.o 00:04:00.002 CC lib/vhost/vhost_scsi.o 00:04:00.002 CC lib/vhost/vhost_blk.o 00:04:00.002 CC lib/vhost/rte_vhost_user.o 00:04:00.002 CC lib/nvmf/rdma.o 00:04:00.002 CC lib/nvmf/auth.o 00:04:00.260 CC lib/ftl/ftl_p2l.o 00:04:00.519 CC lib/ftl/ftl_p2l_log.o 00:04:00.519 CC lib/iscsi/init_grp.o 00:04:00.777 CC lib/ftl/mngt/ftl_mngt.o 00:04:00.777 CC lib/iscsi/iscsi.o 00:04:00.777 CC lib/iscsi/param.o 00:04:00.777 CC lib/iscsi/portal_grp.o 00:04:01.035 CC lib/iscsi/tgt_node.o 00:04:01.035 CC lib/iscsi/iscsi_subsystem.o 00:04:01.035 LIB libspdk_vhost.a 00:04:01.293 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:01.293 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:01.293 CC lib/iscsi/iscsi_rpc.o 00:04:01.293 SO libspdk_vhost.so.8.0 00:04:01.293 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:01.293 SYMLINK libspdk_vhost.so 00:04:01.293 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:01.293 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:01.551 CC lib/iscsi/task.o 00:04:01.551 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:01.551 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:01.551 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:01.551 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:01.551 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:01.811 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:01.811 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:01.811 CC lib/ftl/utils/ftl_conf.o 00:04:01.811 CC lib/ftl/utils/ftl_md.o 00:04:01.811 CC lib/ftl/utils/ftl_mempool.o 00:04:01.811 CC lib/ftl/utils/ftl_bitmap.o 00:04:01.811 CC lib/ftl/utils/ftl_property.o 00:04:01.811 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.068 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.068 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.068 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.068 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.068 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.069 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.069 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:02.069 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:02.069 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:02.327 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:02.327 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:02.327 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:02.327 CC lib/ftl/base/ftl_base_dev.o 00:04:02.327 CC lib/ftl/base/ftl_base_bdev.o 00:04:02.327 CC lib/ftl/ftl_trace.o 00:04:02.588 LIB libspdk_iscsi.a 00:04:02.588 SO libspdk_iscsi.so.8.0 00:04:02.588 LIB libspdk_ftl.a 00:04:02.848 LIB libspdk_nvmf.a 00:04:02.848 SYMLINK libspdk_iscsi.so 00:04:02.848 SO libspdk_ftl.so.9.0 00:04:02.848 SO libspdk_nvmf.so.20.0 00:04:03.107 SYMLINK libspdk_ftl.so 00:04:03.107 SYMLINK libspdk_nvmf.so 00:04:03.673 CC module/env_dpdk/env_dpdk_rpc.o 00:04:03.673 CC module/sock/posix/posix.o 00:04:03.673 CC module/accel/error/accel_error.o 00:04:03.673 CC module/keyring/file/keyring.o 00:04:03.673 CC module/fsdev/aio/fsdev_aio.o 00:04:03.673 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:03.673 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:03.673 CC module/keyring/linux/keyring.o 00:04:03.673 CC module/scheduler/gscheduler/gscheduler.o 00:04:03.673 CC module/blob/bdev/blob_bdev.o 00:04:03.673 LIB libspdk_env_dpdk_rpc.a 00:04:03.931 SO libspdk_env_dpdk_rpc.so.6.0 00:04:03.931 CC module/keyring/linux/keyring_rpc.o 00:04:03.931 SYMLINK libspdk_env_dpdk_rpc.so 00:04:03.931 CC module/keyring/file/keyring_rpc.o 00:04:03.931 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:03.931 LIB libspdk_scheduler_gscheduler.a 00:04:03.931 LIB libspdk_scheduler_dpdk_governor.a 00:04:03.931 SO libspdk_scheduler_gscheduler.so.4.0 00:04:03.931 LIB libspdk_scheduler_dynamic.a 00:04:03.931 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:03.931 CC module/accel/error/accel_error_rpc.o 00:04:03.931 SO libspdk_scheduler_dynamic.so.4.0 00:04:03.931 SYMLINK libspdk_scheduler_gscheduler.so 00:04:03.931 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:03.931 CC module/fsdev/aio/linux_aio_mgr.o 00:04:03.931 LIB libspdk_keyring_linux.a 00:04:03.931 SYMLINK libspdk_scheduler_dynamic.so 00:04:03.931 LIB libspdk_keyring_file.a 
00:04:03.931 LIB libspdk_blob_bdev.a 00:04:03.931 SO libspdk_keyring_file.so.2.0 00:04:03.931 SO libspdk_keyring_linux.so.1.0 00:04:04.189 SO libspdk_blob_bdev.so.12.0 00:04:04.189 LIB libspdk_accel_error.a 00:04:04.189 SYMLINK libspdk_keyring_file.so 00:04:04.189 SYMLINK libspdk_keyring_linux.so 00:04:04.189 SO libspdk_accel_error.so.2.0 00:04:04.189 SYMLINK libspdk_blob_bdev.so 00:04:04.189 CC module/accel/ioat/accel_ioat.o 00:04:04.189 CC module/accel/ioat/accel_ioat_rpc.o 00:04:04.189 CC module/accel/dsa/accel_dsa.o 00:04:04.189 SYMLINK libspdk_accel_error.so 00:04:04.189 CC module/accel/dsa/accel_dsa_rpc.o 00:04:04.189 CC module/accel/iaa/accel_iaa.o 00:04:04.189 CC module/accel/iaa/accel_iaa_rpc.o 00:04:04.446 LIB libspdk_accel_ioat.a 00:04:04.446 SO libspdk_accel_ioat.so.6.0 00:04:04.446 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.446 LIB libspdk_accel_iaa.a 00:04:04.446 CC module/bdev/delay/vbdev_delay.o 00:04:04.446 SO libspdk_accel_iaa.so.3.0 00:04:04.446 SYMLINK libspdk_accel_ioat.so 00:04:04.446 CC module/bdev/error/vbdev_error.o 00:04:04.446 LIB libspdk_fsdev_aio.a 00:04:04.446 LIB libspdk_accel_dsa.a 00:04:04.704 SO libspdk_accel_dsa.so.5.0 00:04:04.704 SYMLINK libspdk_accel_iaa.so 00:04:04.704 SO libspdk_fsdev_aio.so.1.0 00:04:04.704 CC module/bdev/gpt/gpt.o 00:04:04.704 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.704 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.704 SYMLINK libspdk_accel_dsa.so 00:04:04.704 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.704 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:04.704 SYMLINK libspdk_fsdev_aio.so 00:04:04.704 CC module/bdev/error/vbdev_error_rpc.o 00:04:04.704 LIB libspdk_sock_posix.a 00:04:04.704 CC module/bdev/malloc/bdev_malloc.o 00:04:04.704 SO libspdk_sock_posix.so.6.0 00:04:04.704 LIB libspdk_blobfs_bdev.a 00:04:04.704 SO libspdk_blobfs_bdev.so.6.0 00:04:04.704 SYMLINK libspdk_sock_posix.so 00:04:04.704 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:04.704 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:04.962 LIB 
libspdk_bdev_error.a 00:04:04.962 SYMLINK libspdk_blobfs_bdev.so 00:04:04.962 SO libspdk_bdev_error.so.6.0 00:04:04.962 LIB libspdk_bdev_delay.a 00:04:04.962 LIB libspdk_bdev_gpt.a 00:04:04.962 SO libspdk_bdev_delay.so.6.0 00:04:04.963 SYMLINK libspdk_bdev_error.so 00:04:04.963 CC module/bdev/nvme/bdev_nvme.o 00:04:04.963 CC module/bdev/null/bdev_null.o 00:04:04.963 SO libspdk_bdev_gpt.so.6.0 00:04:04.963 CC module/bdev/passthru/vbdev_passthru.o 00:04:04.963 SYMLINK libspdk_bdev_delay.so 00:04:04.963 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.220 SYMLINK libspdk_bdev_gpt.so 00:04:05.220 CC module/bdev/nvme/nvme_rpc.o 00:04:05.220 LIB libspdk_bdev_malloc.a 00:04:05.220 CC module/bdev/raid/bdev_raid.o 00:04:05.220 CC module/bdev/split/vbdev_split.o 00:04:05.220 SO libspdk_bdev_malloc.so.6.0 00:04:05.220 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.220 LIB libspdk_bdev_lvol.a 00:04:05.220 SYMLINK libspdk_bdev_malloc.so 00:04:05.220 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.220 SO libspdk_bdev_lvol.so.6.0 00:04:05.220 CC module/bdev/null/bdev_null_rpc.o 00:04:05.478 SYMLINK libspdk_bdev_lvol.so 00:04:05.478 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.478 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.478 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.478 CC module/bdev/raid/raid0.o 00:04:05.478 LIB libspdk_bdev_null.a 00:04:05.478 SO libspdk_bdev_null.so.6.0 00:04:05.478 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:05.478 LIB libspdk_bdev_passthru.a 00:04:05.478 LIB libspdk_bdev_split.a 00:04:05.478 SO libspdk_bdev_passthru.so.6.0 00:04:05.478 SYMLINK libspdk_bdev_null.so 00:04:05.736 SO libspdk_bdev_split.so.6.0 00:04:05.736 SYMLINK libspdk_bdev_passthru.so 00:04:05.736 SYMLINK libspdk_bdev_split.so 00:04:05.736 CC module/bdev/raid/raid1.o 00:04:05.736 CC module/bdev/aio/bdev_aio.o 00:04:05.736 CC module/bdev/ftl/bdev_ftl.o 00:04:05.736 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:05.736 CC module/bdev/iscsi/bdev_iscsi.o 00:04:05.994 CC 
module/bdev/nvme/vbdev_opal.o 00:04:05.994 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:05.994 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:05.994 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:05.994 CC module/bdev/aio/bdev_aio_rpc.o 00:04:05.994 LIB libspdk_bdev_ftl.a 00:04:06.252 CC module/bdev/raid/concat.o 00:04:06.252 SO libspdk_bdev_ftl.so.6.0 00:04:06.252 LIB libspdk_bdev_zone_block.a 00:04:06.252 CC module/bdev/raid/raid5f.o 00:04:06.252 SO libspdk_bdev_zone_block.so.6.0 00:04:06.252 SYMLINK libspdk_bdev_ftl.so 00:04:06.252 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:06.252 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:06.252 SYMLINK libspdk_bdev_zone_block.so 00:04:06.252 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.252 LIB libspdk_bdev_aio.a 00:04:06.252 SO libspdk_bdev_aio.so.6.0 00:04:06.252 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:06.252 SYMLINK libspdk_bdev_aio.so 00:04:06.511 LIB libspdk_bdev_iscsi.a 00:04:06.511 SO libspdk_bdev_iscsi.so.6.0 00:04:06.511 SYMLINK libspdk_bdev_iscsi.so 00:04:06.511 LIB libspdk_bdev_virtio.a 00:04:06.770 SO libspdk_bdev_virtio.so.6.0 00:04:06.770 LIB libspdk_bdev_raid.a 00:04:06.771 SYMLINK libspdk_bdev_virtio.so 00:04:06.771 SO libspdk_bdev_raid.so.6.0 00:04:07.030 SYMLINK libspdk_bdev_raid.so 00:04:07.970 LIB libspdk_bdev_nvme.a 00:04:08.230 SO libspdk_bdev_nvme.so.7.1 00:04:08.230 SYMLINK libspdk_bdev_nvme.so 00:04:08.858 CC module/event/subsystems/fsdev/fsdev.o 00:04:08.858 CC module/event/subsystems/vmd/vmd.o 00:04:08.858 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:08.858 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:08.858 CC module/event/subsystems/iobuf/iobuf.o 00:04:08.858 CC module/event/subsystems/scheduler/scheduler.o 00:04:08.858 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:08.858 CC module/event/subsystems/sock/sock.o 00:04:08.858 CC module/event/subsystems/keyring/keyring.o 00:04:08.858 LIB libspdk_event_keyring.a 00:04:08.858 LIB libspdk_event_vhost_blk.a 00:04:08.858 
LIB libspdk_event_fsdev.a 00:04:08.858 LIB libspdk_event_sock.a 00:04:08.858 LIB libspdk_event_iobuf.a 00:04:09.118 LIB libspdk_event_scheduler.a 00:04:09.118 LIB libspdk_event_vmd.a 00:04:09.118 SO libspdk_event_keyring.so.1.0 00:04:09.118 SO libspdk_event_vhost_blk.so.3.0 00:04:09.118 SO libspdk_event_fsdev.so.1.0 00:04:09.118 SO libspdk_event_sock.so.5.0 00:04:09.118 SO libspdk_event_scheduler.so.4.0 00:04:09.118 SO libspdk_event_iobuf.so.3.0 00:04:09.118 SO libspdk_event_vmd.so.6.0 00:04:09.118 SYMLINK libspdk_event_keyring.so 00:04:09.118 SYMLINK libspdk_event_vhost_blk.so 00:04:09.118 SYMLINK libspdk_event_fsdev.so 00:04:09.118 SYMLINK libspdk_event_scheduler.so 00:04:09.118 SYMLINK libspdk_event_sock.so 00:04:09.118 SYMLINK libspdk_event_iobuf.so 00:04:09.118 SYMLINK libspdk_event_vmd.so 00:04:09.377 CC module/event/subsystems/accel/accel.o 00:04:09.637 LIB libspdk_event_accel.a 00:04:09.637 SO libspdk_event_accel.so.6.0 00:04:09.637 SYMLINK libspdk_event_accel.so 00:04:10.205 CC module/event/subsystems/bdev/bdev.o 00:04:10.205 LIB libspdk_event_bdev.a 00:04:10.205 SO libspdk_event_bdev.so.6.0 00:04:10.464 SYMLINK libspdk_event_bdev.so 00:04:10.723 CC module/event/subsystems/nbd/nbd.o 00:04:10.723 CC module/event/subsystems/scsi/scsi.o 00:04:10.723 CC module/event/subsystems/ublk/ublk.o 00:04:10.723 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:10.723 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:10.983 LIB libspdk_event_nbd.a 00:04:10.983 SO libspdk_event_nbd.so.6.0 00:04:10.983 LIB libspdk_event_scsi.a 00:04:10.983 LIB libspdk_event_ublk.a 00:04:10.983 SO libspdk_event_ublk.so.3.0 00:04:10.983 SO libspdk_event_scsi.so.6.0 00:04:10.983 SYMLINK libspdk_event_nbd.so 00:04:10.983 SYMLINK libspdk_event_ublk.so 00:04:10.983 SYMLINK libspdk_event_scsi.so 00:04:10.983 LIB libspdk_event_nvmf.a 00:04:10.983 SO libspdk_event_nvmf.so.6.0 00:04:11.242 SYMLINK libspdk_event_nvmf.so 00:04:11.501 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:11.501 CC 
module/event/subsystems/iscsi/iscsi.o 00:04:11.501 LIB libspdk_event_vhost_scsi.a 00:04:11.501 SO libspdk_event_vhost_scsi.so.3.0 00:04:11.501 LIB libspdk_event_iscsi.a 00:04:11.760 SYMLINK libspdk_event_vhost_scsi.so 00:04:11.760 SO libspdk_event_iscsi.so.6.0 00:04:11.760 SYMLINK libspdk_event_iscsi.so 00:04:12.018 SO libspdk.so.6.0 00:04:12.018 SYMLINK libspdk.so 00:04:12.276 CC test/rpc_client/rpc_client_test.o 00:04:12.276 CXX app/trace/trace.o 00:04:12.276 CC app/trace_record/trace_record.o 00:04:12.276 TEST_HEADER include/spdk/accel.h 00:04:12.276 TEST_HEADER include/spdk/accel_module.h 00:04:12.276 TEST_HEADER include/spdk/assert.h 00:04:12.276 TEST_HEADER include/spdk/barrier.h 00:04:12.276 TEST_HEADER include/spdk/base64.h 00:04:12.276 TEST_HEADER include/spdk/bdev.h 00:04:12.276 TEST_HEADER include/spdk/bdev_module.h 00:04:12.276 TEST_HEADER include/spdk/bdev_zone.h 00:04:12.276 TEST_HEADER include/spdk/bit_array.h 00:04:12.276 TEST_HEADER include/spdk/bit_pool.h 00:04:12.276 TEST_HEADER include/spdk/blob_bdev.h 00:04:12.276 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:12.276 TEST_HEADER include/spdk/blobfs.h 00:04:12.276 TEST_HEADER include/spdk/blob.h 00:04:12.276 CC app/nvmf_tgt/nvmf_main.o 00:04:12.276 TEST_HEADER include/spdk/conf.h 00:04:12.276 TEST_HEADER include/spdk/config.h 00:04:12.276 TEST_HEADER include/spdk/cpuset.h 00:04:12.276 TEST_HEADER include/spdk/crc16.h 00:04:12.276 TEST_HEADER include/spdk/crc32.h 00:04:12.276 TEST_HEADER include/spdk/crc64.h 00:04:12.276 TEST_HEADER include/spdk/dif.h 00:04:12.276 TEST_HEADER include/spdk/dma.h 00:04:12.276 TEST_HEADER include/spdk/endian.h 00:04:12.276 TEST_HEADER include/spdk/env_dpdk.h 00:04:12.276 TEST_HEADER include/spdk/env.h 00:04:12.276 TEST_HEADER include/spdk/event.h 00:04:12.276 TEST_HEADER include/spdk/fd_group.h 00:04:12.276 TEST_HEADER include/spdk/fd.h 00:04:12.276 TEST_HEADER include/spdk/file.h 00:04:12.276 TEST_HEADER include/spdk/fsdev.h 00:04:12.276 TEST_HEADER 
include/spdk/fsdev_module.h 00:04:12.276 TEST_HEADER include/spdk/ftl.h 00:04:12.276 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:12.276 TEST_HEADER include/spdk/gpt_spec.h 00:04:12.276 CC examples/util/zipf/zipf.o 00:04:12.276 TEST_HEADER include/spdk/hexlify.h 00:04:12.276 TEST_HEADER include/spdk/histogram_data.h 00:04:12.276 TEST_HEADER include/spdk/idxd.h 00:04:12.276 TEST_HEADER include/spdk/idxd_spec.h 00:04:12.276 TEST_HEADER include/spdk/init.h 00:04:12.276 TEST_HEADER include/spdk/ioat.h 00:04:12.276 TEST_HEADER include/spdk/ioat_spec.h 00:04:12.276 TEST_HEADER include/spdk/iscsi_spec.h 00:04:12.276 CC test/dma/test_dma/test_dma.o 00:04:12.276 TEST_HEADER include/spdk/json.h 00:04:12.276 TEST_HEADER include/spdk/jsonrpc.h 00:04:12.276 TEST_HEADER include/spdk/keyring.h 00:04:12.276 TEST_HEADER include/spdk/keyring_module.h 00:04:12.276 TEST_HEADER include/spdk/likely.h 00:04:12.276 CC test/thread/poller_perf/poller_perf.o 00:04:12.276 TEST_HEADER include/spdk/log.h 00:04:12.276 CC test/app/bdev_svc/bdev_svc.o 00:04:12.276 TEST_HEADER include/spdk/lvol.h 00:04:12.276 TEST_HEADER include/spdk/md5.h 00:04:12.276 TEST_HEADER include/spdk/memory.h 00:04:12.276 TEST_HEADER include/spdk/mmio.h 00:04:12.276 TEST_HEADER include/spdk/nbd.h 00:04:12.276 TEST_HEADER include/spdk/net.h 00:04:12.276 TEST_HEADER include/spdk/notify.h 00:04:12.534 TEST_HEADER include/spdk/nvme.h 00:04:12.534 TEST_HEADER include/spdk/nvme_intel.h 00:04:12.534 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:12.534 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:12.534 TEST_HEADER include/spdk/nvme_spec.h 00:04:12.534 TEST_HEADER include/spdk/nvme_zns.h 00:04:12.534 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:12.534 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:12.534 CC test/env/mem_callbacks/mem_callbacks.o 00:04:12.534 TEST_HEADER include/spdk/nvmf.h 00:04:12.534 TEST_HEADER include/spdk/nvmf_spec.h 00:04:12.534 TEST_HEADER include/spdk/nvmf_transport.h 00:04:12.534 TEST_HEADER 
include/spdk/opal.h 00:04:12.534 TEST_HEADER include/spdk/opal_spec.h 00:04:12.534 TEST_HEADER include/spdk/pci_ids.h 00:04:12.534 TEST_HEADER include/spdk/pipe.h 00:04:12.534 TEST_HEADER include/spdk/queue.h 00:04:12.534 TEST_HEADER include/spdk/reduce.h 00:04:12.534 TEST_HEADER include/spdk/rpc.h 00:04:12.534 TEST_HEADER include/spdk/scheduler.h 00:04:12.534 TEST_HEADER include/spdk/scsi.h 00:04:12.534 LINK rpc_client_test 00:04:12.534 TEST_HEADER include/spdk/scsi_spec.h 00:04:12.534 TEST_HEADER include/spdk/sock.h 00:04:12.534 TEST_HEADER include/spdk/stdinc.h 00:04:12.534 TEST_HEADER include/spdk/string.h 00:04:12.534 TEST_HEADER include/spdk/thread.h 00:04:12.534 TEST_HEADER include/spdk/trace.h 00:04:12.534 TEST_HEADER include/spdk/trace_parser.h 00:04:12.534 TEST_HEADER include/spdk/tree.h 00:04:12.534 TEST_HEADER include/spdk/ublk.h 00:04:12.534 TEST_HEADER include/spdk/util.h 00:04:12.534 TEST_HEADER include/spdk/uuid.h 00:04:12.534 TEST_HEADER include/spdk/version.h 00:04:12.534 LINK nvmf_tgt 00:04:12.534 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:12.534 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:12.534 TEST_HEADER include/spdk/vhost.h 00:04:12.534 TEST_HEADER include/spdk/vmd.h 00:04:12.534 TEST_HEADER include/spdk/xor.h 00:04:12.534 TEST_HEADER include/spdk/zipf.h 00:04:12.534 CXX test/cpp_headers/accel.o 00:04:12.534 LINK zipf 00:04:12.534 LINK spdk_trace_record 00:04:12.534 LINK poller_perf 00:04:12.534 LINK bdev_svc 00:04:12.792 LINK spdk_trace 00:04:12.792 CXX test/cpp_headers/accel_module.o 00:04:12.792 CC test/env/vtophys/vtophys.o 00:04:12.792 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:12.792 CC test/env/memory/memory_ut.o 00:04:12.792 CC test/env/pci/pci_ut.o 00:04:12.792 CXX test/cpp_headers/assert.o 00:04:12.792 CC examples/ioat/perf/perf.o 00:04:13.050 LINK test_dma 00:04:13.050 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:13.050 LINK vtophys 00:04:13.050 CC app/iscsi_tgt/iscsi_tgt.o 00:04:13.050 LINK 
env_dpdk_post_init 00:04:13.050 CXX test/cpp_headers/barrier.o 00:04:13.050 LINK mem_callbacks 00:04:13.050 LINK ioat_perf 00:04:13.309 CXX test/cpp_headers/base64.o 00:04:13.309 CC examples/ioat/verify/verify.o 00:04:13.309 LINK iscsi_tgt 00:04:13.309 CC test/app/histogram_perf/histogram_perf.o 00:04:13.309 LINK pci_ut 00:04:13.309 CXX test/cpp_headers/bdev.o 00:04:13.309 CC test/event/event_perf/event_perf.o 00:04:13.309 LINK histogram_perf 00:04:13.309 CC examples/vmd/lsvmd/lsvmd.o 00:04:13.566 CC examples/idxd/perf/perf.o 00:04:13.566 LINK verify 00:04:13.566 LINK nvme_fuzz 00:04:13.566 CXX test/cpp_headers/bdev_module.o 00:04:13.566 LINK lsvmd 00:04:13.566 LINK event_perf 00:04:13.566 CC app/spdk_tgt/spdk_tgt.o 00:04:13.823 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:13.823 CXX test/cpp_headers/bdev_zone.o 00:04:13.823 CC app/spdk_lspci/spdk_lspci.o 00:04:13.823 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:13.823 LINK spdk_tgt 00:04:13.823 CC test/event/reactor/reactor.o 00:04:13.823 CC examples/vmd/led/led.o 00:04:13.823 CC examples/thread/thread/thread_ex.o 00:04:13.823 LINK idxd_perf 00:04:13.823 LINK spdk_lspci 00:04:14.080 LINK interrupt_tgt 00:04:14.080 CXX test/cpp_headers/bit_array.o 00:04:14.080 LINK reactor 00:04:14.080 LINK led 00:04:14.080 CXX test/cpp_headers/bit_pool.o 00:04:14.080 LINK thread 00:04:14.080 CC app/spdk_nvme_perf/perf.o 00:04:14.080 CXX test/cpp_headers/blob_bdev.o 00:04:14.080 CXX test/cpp_headers/blobfs_bdev.o 00:04:14.336 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.336 CC app/spdk_nvme_identify/identify.o 00:04:14.336 CC test/event/reactor_perf/reactor_perf.o 00:04:14.336 LINK memory_ut 00:04:14.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:14.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:14.336 CXX test/cpp_headers/blobfs.o 00:04:14.336 LINK reactor_perf 00:04:14.336 CXX test/cpp_headers/blob.o 00:04:14.593 LINK spdk_nvme_discover 00:04:14.593 CXX test/cpp_headers/conf.o 00:04:14.593 CC 
examples/sock/hello_world/hello_sock.o 00:04:14.593 CC test/event/app_repeat/app_repeat.o 00:04:14.593 CC app/spdk_top/spdk_top.o 00:04:14.850 CXX test/cpp_headers/config.o 00:04:14.850 CXX test/cpp_headers/cpuset.o 00:04:14.850 CC test/accel/dif/dif.o 00:04:14.850 LINK app_repeat 00:04:14.850 LINK vhost_fuzz 00:04:14.850 LINK hello_sock 00:04:14.850 CC test/blobfs/mkfs/mkfs.o 00:04:14.850 CXX test/cpp_headers/crc16.o 00:04:15.108 CXX test/cpp_headers/crc32.o 00:04:15.108 CXX test/cpp_headers/crc64.o 00:04:15.108 LINK mkfs 00:04:15.108 LINK spdk_nvme_perf 00:04:15.108 CC test/event/scheduler/scheduler.o 00:04:15.366 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:15.366 CXX test/cpp_headers/dif.o 00:04:15.366 LINK spdk_nvme_identify 00:04:15.366 CC test/app/jsoncat/jsoncat.o 00:04:15.366 LINK scheduler 00:04:15.366 CXX test/cpp_headers/dma.o 00:04:15.366 CC app/vhost/vhost.o 00:04:15.697 LINK jsoncat 00:04:15.697 CXX test/cpp_headers/endian.o 00:04:15.697 LINK hello_fsdev 00:04:15.697 CC examples/accel/perf/accel_perf.o 00:04:15.697 LINK dif 00:04:15.697 CXX test/cpp_headers/env_dpdk.o 00:04:15.697 LINK vhost 00:04:15.955 LINK spdk_top 00:04:15.955 LINK iscsi_fuzz 00:04:15.955 CC examples/nvme/hello_world/hello_world.o 00:04:15.955 CXX test/cpp_headers/env.o 00:04:15.955 CC examples/blob/hello_world/hello_blob.o 00:04:15.955 CC examples/blob/cli/blobcli.o 00:04:15.955 CC examples/nvme/reconnect/reconnect.o 00:04:15.955 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:15.955 CXX test/cpp_headers/event.o 00:04:15.955 CC examples/nvme/arbitration/arbitration.o 00:04:16.212 LINK hello_world 00:04:16.212 LINK hello_blob 00:04:16.212 LINK accel_perf 00:04:16.212 CC app/spdk_dd/spdk_dd.o 00:04:16.212 CXX test/cpp_headers/fd_group.o 00:04:16.212 CC test/app/stub/stub.o 00:04:16.212 LINK reconnect 00:04:16.212 CXX test/cpp_headers/fd.o 00:04:16.469 CXX test/cpp_headers/file.o 00:04:16.469 CXX test/cpp_headers/fsdev.o 00:04:16.469 LINK stub 00:04:16.469 LINK blobcli 
00:04:16.469 LINK arbitration 00:04:16.469 CXX test/cpp_headers/fsdev_module.o 00:04:16.469 CC examples/nvme/hotplug/hotplug.o 00:04:16.469 CC examples/bdev/hello_world/hello_bdev.o 00:04:16.726 LINK spdk_dd 00:04:16.726 LINK nvme_manage 00:04:16.726 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.726 CXX test/cpp_headers/ftl.o 00:04:16.726 CC examples/bdev/bdevperf/bdevperf.o 00:04:16.726 CC examples/nvme/abort/abort.o 00:04:16.726 LINK hotplug 00:04:16.726 LINK cmb_copy 00:04:16.726 CC app/fio/nvme/fio_plugin.o 00:04:16.726 LINK hello_bdev 00:04:16.726 CXX test/cpp_headers/fuse_dispatcher.o 00:04:16.984 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.984 CC test/lvol/esnap/esnap.o 00:04:16.984 CC app/fio/bdev/fio_plugin.o 00:04:16.984 CXX test/cpp_headers/gpt_spec.o 00:04:16.984 CXX test/cpp_headers/hexlify.o 00:04:16.984 CXX test/cpp_headers/histogram_data.o 00:04:16.984 CXX test/cpp_headers/idxd.o 00:04:16.984 LINK pmr_persistence 00:04:17.242 CXX test/cpp_headers/idxd_spec.o 00:04:17.242 LINK abort 00:04:17.242 CXX test/cpp_headers/init.o 00:04:17.242 CXX test/cpp_headers/ioat.o 00:04:17.500 CXX test/cpp_headers/ioat_spec.o 00:04:17.500 CXX test/cpp_headers/iscsi_spec.o 00:04:17.500 CXX test/cpp_headers/json.o 00:04:17.500 CC test/nvme/aer/aer.o 00:04:17.500 CC test/nvme/reset/reset.o 00:04:17.500 LINK spdk_nvme 00:04:17.500 CC test/bdev/bdevio/bdevio.o 00:04:17.500 CXX test/cpp_headers/jsonrpc.o 00:04:17.500 CXX test/cpp_headers/keyring.o 00:04:17.500 LINK spdk_bdev 00:04:17.759 CXX test/cpp_headers/keyring_module.o 00:04:17.759 CC test/nvme/sgl/sgl.o 00:04:17.759 LINK reset 00:04:17.759 LINK bdevperf 00:04:17.759 CXX test/cpp_headers/likely.o 00:04:17.759 LINK aer 00:04:17.759 CC test/nvme/overhead/overhead.o 00:04:17.759 CC test/nvme/e2edp/nvme_dp.o 00:04:18.017 CC test/nvme/err_injection/err_injection.o 00:04:18.017 CXX test/cpp_headers/log.o 00:04:18.017 CC test/nvme/startup/startup.o 00:04:18.017 LINK sgl 00:04:18.017 LINK bdevio 00:04:18.017 
LINK err_injection 00:04:18.017 CXX test/cpp_headers/lvol.o 00:04:18.276 CC test/nvme/reserve/reserve.o 00:04:18.276 LINK overhead 00:04:18.276 LINK nvme_dp 00:04:18.276 LINK startup 00:04:18.276 CXX test/cpp_headers/md5.o 00:04:18.276 CXX test/cpp_headers/memory.o 00:04:18.276 CC examples/nvmf/nvmf/nvmf.o 00:04:18.276 CC test/nvme/simple_copy/simple_copy.o 00:04:18.276 CC test/nvme/connect_stress/connect_stress.o 00:04:18.276 LINK reserve 00:04:18.534 CXX test/cpp_headers/mmio.o 00:04:18.534 CC test/nvme/boot_partition/boot_partition.o 00:04:18.534 CC test/nvme/compliance/nvme_compliance.o 00:04:18.534 LINK connect_stress 00:04:18.534 LINK simple_copy 00:04:18.534 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.534 CXX test/cpp_headers/nbd.o 00:04:18.534 LINK nvmf 00:04:18.534 LINK boot_partition 00:04:18.534 CXX test/cpp_headers/net.o 00:04:18.534 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.792 CC test/nvme/fdp/fdp.o 00:04:18.792 CXX test/cpp_headers/notify.o 00:04:18.792 LINK fused_ordering 00:04:18.792 CC test/nvme/cuse/cuse.o 00:04:18.792 CXX test/cpp_headers/nvme.o 00:04:18.792 CXX test/cpp_headers/nvme_intel.o 00:04:18.792 LINK doorbell_aers 00:04:18.792 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.792 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.792 LINK nvme_compliance 00:04:19.050 CXX test/cpp_headers/nvme_spec.o 00:04:19.050 CXX test/cpp_headers/nvme_zns.o 00:04:19.050 CXX test/cpp_headers/nvmf_cmd.o 00:04:19.050 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:19.050 CXX test/cpp_headers/nvmf.o 00:04:19.050 LINK fdp 00:04:19.050 CXX test/cpp_headers/nvmf_spec.o 00:04:19.050 CXX test/cpp_headers/nvmf_transport.o 00:04:19.050 CXX test/cpp_headers/opal.o 00:04:19.050 CXX test/cpp_headers/opal_spec.o 00:04:19.308 CXX test/cpp_headers/pci_ids.o 00:04:19.308 CXX test/cpp_headers/pipe.o 00:04:19.308 CXX test/cpp_headers/queue.o 00:04:19.308 CXX test/cpp_headers/reduce.o 00:04:19.308 CXX test/cpp_headers/rpc.o 00:04:19.308 CXX 
test/cpp_headers/scheduler.o 00:04:19.308 CXX test/cpp_headers/scsi.o 00:04:19.308 CXX test/cpp_headers/scsi_spec.o 00:04:19.308 CXX test/cpp_headers/sock.o 00:04:19.308 CXX test/cpp_headers/stdinc.o 00:04:19.308 CXX test/cpp_headers/string.o 00:04:19.308 CXX test/cpp_headers/thread.o 00:04:19.566 CXX test/cpp_headers/trace.o 00:04:19.566 CXX test/cpp_headers/trace_parser.o 00:04:19.566 CXX test/cpp_headers/tree.o 00:04:19.566 CXX test/cpp_headers/ublk.o 00:04:19.566 CXX test/cpp_headers/util.o 00:04:19.566 CXX test/cpp_headers/uuid.o 00:04:19.566 CXX test/cpp_headers/version.o 00:04:19.566 CXX test/cpp_headers/vfio_user_pci.o 00:04:19.566 CXX test/cpp_headers/vfio_user_spec.o 00:04:19.566 CXX test/cpp_headers/vhost.o 00:04:19.566 CXX test/cpp_headers/vmd.o 00:04:19.566 CXX test/cpp_headers/xor.o 00:04:19.566 CXX test/cpp_headers/zipf.o 00:04:20.133 LINK cuse 00:04:22.674 LINK esnap 00:04:22.933 00:04:22.933 real 1m30.908s 00:04:22.933 user 7m58.097s 00:04:22.933 sys 1m46.175s 00:04:22.933 19:02:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:22.933 19:02:32 make -- common/autotest_common.sh@10 -- $ set +x 00:04:22.933 ************************************ 00:04:22.933 END TEST make 00:04:22.933 ************************************ 00:04:22.933 19:02:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:22.933 19:02:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:22.933 19:02:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:22.933 19:02:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.933 19:02:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:22.933 19:02:32 -- pm/common@44 -- $ pid=5466 00:04:22.933 19:02:32 -- pm/common@50 -- $ kill -TERM 5466 00:04:22.933 19:02:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:22.933 19:02:32 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:22.933 19:02:32 -- pm/common@44 -- $ pid=5468 00:04:22.933 19:02:32 -- pm/common@50 -- $ kill -TERM 5468 00:04:22.933 19:02:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:22.933 19:02:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:23.195 19:02:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.195 19:02:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.195 19:02:32 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.195 19:02:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.195 19:02:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.195 19:02:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.195 19:02:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.195 19:02:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.195 19:02:32 -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.195 19:02:32 -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.195 19:02:32 -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.195 19:02:32 -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.195 19:02:32 -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.195 19:02:32 -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.195 19:02:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.195 19:02:32 -- scripts/common.sh@344 -- # case "$op" in 00:04:23.195 19:02:32 -- scripts/common.sh@345 -- # : 1 00:04:23.195 19:02:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.195 19:02:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.195 19:02:32 -- scripts/common.sh@365 -- # decimal 1 00:04:23.195 19:02:32 -- scripts/common.sh@353 -- # local d=1 00:04:23.195 19:02:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.195 19:02:32 -- scripts/common.sh@355 -- # echo 1 00:04:23.195 19:02:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.195 19:02:32 -- scripts/common.sh@366 -- # decimal 2 00:04:23.195 19:02:32 -- scripts/common.sh@353 -- # local d=2 00:04:23.195 19:02:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.195 19:02:32 -- scripts/common.sh@355 -- # echo 2 00:04:23.195 19:02:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.195 19:02:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.195 19:02:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.195 19:02:32 -- scripts/common.sh@368 -- # return 0 00:04:23.195 19:02:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.195 19:02:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.195 --rc genhtml_branch_coverage=1 00:04:23.195 --rc genhtml_function_coverage=1 00:04:23.195 --rc genhtml_legend=1 00:04:23.195 --rc geninfo_all_blocks=1 00:04:23.195 --rc geninfo_unexecuted_blocks=1 00:04:23.195 00:04:23.195 ' 00:04:23.195 19:02:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.195 --rc genhtml_branch_coverage=1 00:04:23.195 --rc genhtml_function_coverage=1 00:04:23.195 --rc genhtml_legend=1 00:04:23.195 --rc geninfo_all_blocks=1 00:04:23.195 --rc geninfo_unexecuted_blocks=1 00:04:23.195 00:04:23.195 ' 00:04:23.195 19:02:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.195 --rc genhtml_branch_coverage=1 00:04:23.195 --rc 
genhtml_function_coverage=1 00:04:23.195 --rc genhtml_legend=1 00:04:23.195 --rc geninfo_all_blocks=1 00:04:23.195 --rc geninfo_unexecuted_blocks=1 00:04:23.195 00:04:23.195 ' 00:04:23.195 19:02:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.195 --rc genhtml_branch_coverage=1 00:04:23.195 --rc genhtml_function_coverage=1 00:04:23.195 --rc genhtml_legend=1 00:04:23.195 --rc geninfo_all_blocks=1 00:04:23.195 --rc geninfo_unexecuted_blocks=1 00:04:23.195 00:04:23.195 ' 00:04:23.195 19:02:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.195 19:02:32 -- nvmf/common.sh@7 -- # uname -s 00:04:23.195 19:02:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.195 19:02:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.195 19:02:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.195 19:02:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.195 19:02:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.195 19:02:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.195 19:02:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.195 19:02:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.195 19:02:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.195 19:02:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.195 19:02:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f596d6fb-0518-4483-83ba-bd5f5a3cc19e 00:04:23.195 19:02:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f596d6fb-0518-4483-83ba-bd5f5a3cc19e 00:04:23.195 19:02:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.195 19:02:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.195 19:02:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.195 19:02:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:23.195 19:02:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.195 19:02:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.195 19:02:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.195 19:02:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.195 19:02:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.195 19:02:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.195 19:02:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.195 19:02:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.195 19:02:32 -- paths/export.sh@5 -- # export PATH 00:04:23.195 19:02:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.195 19:02:32 -- nvmf/common.sh@51 -- # : 0 00:04:23.195 19:02:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.195 19:02:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.195 19:02:32 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:23.195 19:02:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.195 19:02:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.195 19:02:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.195 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.195 19:02:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.195 19:02:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.195 19:02:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.195 19:02:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:23.195 19:02:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:23.195 19:02:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:23.195 19:02:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:23.195 19:02:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.471 19:02:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:23.471 19:02:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.471 19:02:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:23.471 19:02:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:23.471 19:02:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:23.471 19:02:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54501 00:04:23.471 19:02:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:23.471 19:02:32 -- pm/common@17 -- # local monitor 00:04:23.471 19:02:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.471 19:02:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.471 19:02:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:23.471 19:02:32 -- pm/common@21 -- # date +%s 00:04:23.471 19:02:32 -- pm/common@21 -- # date +%s 00:04:23.471 19:02:32 -- 
pm/common@25 -- # sleep 1 00:04:23.471 19:02:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732734152 00:04:23.471 19:02:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732734152 00:04:23.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732734152_collect-vmstat.pm.log 00:04:23.471 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732734152_collect-cpu-load.pm.log 00:04:24.426 19:02:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.427 19:02:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.427 19:02:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.427 19:02:33 -- common/autotest_common.sh@10 -- # set +x 00:04:24.427 19:02:33 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.427 19:02:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:24.427 19:02:33 -- common/autotest_common.sh@10 -- # set +x 00:04:24.427 19:02:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.427 19:02:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:24.427 19:02:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.427 19:02:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.427 19:02:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.427 19:02:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:24.427 19:02:33 -- common/autotest_common.sh@1457 -- # uname 00:04:24.427 19:02:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:24.427 19:02:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:24.427 19:02:33 -- common/autotest_common.sh@1477 -- # 
uname 00:04:24.427 19:02:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:24.427 19:02:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:24.427 19:02:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:24.686 lcov: LCOV version 1.15 00:04:24.686 19:02:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:39.580 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:39.580 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:54.477 19:03:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:54.477 19:03:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.477 19:03:03 -- common/autotest_common.sh@10 -- # set +x 00:04:54.477 19:03:03 -- spdk/autotest.sh@78 -- # rm -f 00:04:54.477 19:03:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.738 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:54.738 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:54.738 19:03:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:54.738 19:03:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:54.738 19:03:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:54.738 19:03:04 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:54.738 19:03:04 
-- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.738 19:03:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:54.738 19:03:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:54.738 19:03:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.738 19:03:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:54.738 19:03:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:54.738 19:03:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.738 19:03:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:54.738 19:03:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:54.738 19:03:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.738 19:03:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:54.738 19:03:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:54.738 19:03:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:54.738 19:03:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.738 19:03:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:54.738 19:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.738 19:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.738 19:03:04 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:04:54.738 19:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:54.738 19:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:54.738 No valid GPT data, bailing 00:04:54.738 19:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.738 19:03:04 -- scripts/common.sh@394 -- # pt= 00:04:54.738 19:03:04 -- scripts/common.sh@395 -- # return 1 00:04:54.738 19:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:54.738 1+0 records in 00:04:54.738 1+0 records out 00:04:54.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539347 s, 194 MB/s 00:04:54.738 19:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.738 19:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.738 19:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:54.738 19:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:54.738 19:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:54.999 No valid GPT data, bailing 00:04:54.999 19:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:54.999 19:03:04 -- scripts/common.sh@394 -- # pt= 00:04:54.999 19:03:04 -- scripts/common.sh@395 -- # return 1 00:04:54.999 19:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:54.999 1+0 records in 00:04:54.999 1+0 records out 00:04:54.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00686927 s, 153 MB/s 00:04:54.999 19:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.999 19:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.999 19:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:54.999 19:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:54.999 19:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:54.999 
No valid GPT data, bailing 00:04:54.999 19:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:54.999 19:03:04 -- scripts/common.sh@394 -- # pt= 00:04:54.999 19:03:04 -- scripts/common.sh@395 -- # return 1 00:04:54.999 19:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:54.999 1+0 records in 00:04:54.999 1+0 records out 00:04:54.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620426 s, 169 MB/s 00:04:54.999 19:03:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.999 19:03:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.999 19:03:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:54.999 19:03:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:54.999 19:03:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:54.999 No valid GPT data, bailing 00:04:54.999 19:03:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:54.999 19:03:04 -- scripts/common.sh@394 -- # pt= 00:04:54.999 19:03:04 -- scripts/common.sh@395 -- # return 1 00:04:54.999 19:03:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:54.999 1+0 records in 00:04:54.999 1+0 records out 00:04:54.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398433 s, 263 MB/s 00:04:54.999 19:03:04 -- spdk/autotest.sh@105 -- # sync 00:04:55.260 19:03:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:55.260 19:03:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:55.260 19:03:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.559 19:03:07 -- spdk/autotest.sh@111 -- # uname -s 00:04:58.559 19:03:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:58.559 19:03:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:58.559 19:03:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:58.818 
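Annotation: the per-namespace cleanup traced above first checks for a partition table (`spdk-gpt.py`, then `blkid -s PTTYPE`) and, finding none, zeroes the first MiB of the namespace with `dd`. A minimal sketch of that step under the same precondition; `wipe_first_mib` is a hypothetical name, not the script's own function:

```shell
# Sketch of the check-then-wipe cleanup from the trace above.
# Refuses to touch anything that still carries a partition table.
wipe_first_mib() {
    local dev=$1
    local pt
    # blkid prints nothing (and exits non-zero) when no table is present.
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
    [[ -z $pt ]] || return 1   # partitioned: leave it alone
    # Zero the first MiB, matching the trace's bs=1M count=1.
    dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
}
```

`conv=notrunc` is added here so the sketch is also safe against regular files; on a block device it makes no difference.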
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.818 Hugepages 00:04:58.818 node hugesize free / total 00:04:58.818 node0 1048576kB 0 / 0 00:04:58.818 node0 2048kB 0 / 0 00:04:58.818 00:04:58.818 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:58.818 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:59.079 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:59.079 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:59.079 19:03:08 -- spdk/autotest.sh@117 -- # uname -s 00:04:59.079 19:03:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:59.079 19:03:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:59.079 19:03:08 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.020 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.020 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.280 19:03:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:01.221 19:03:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:01.221 19:03:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:01.221 19:03:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.221 19:03:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:01.221 19:03:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:01.221 19:03:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:01.221 19:03:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.221 19:03:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:01.221 19:03:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.221 19:03:10 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:01.221 19:03:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:01.221 19:03:10 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.791 Waiting for block devices as requested 00:05:01.791 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.051 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:02.051 19:03:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:02.051 19:03:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:02.051 19:03:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:02.051 19:03:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:02.051 19:03:11 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:02.051 19:03:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1543 -- # continue 00:05:02.051 19:03:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:02.051 19:03:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.051 19:03:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:02.051 19:03:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:02.051 19:03:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:02.051 19:03:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:02.051 19:03:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:02.051 19:03:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:02.051 19:03:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:02.051 19:03:11 -- common/autotest_common.sh@1543 -- # continue 00:05:02.051 19:03:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:02.051 19:03:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.051 19:03:11 -- common/autotest_common.sh@10 -- # set +x 00:05:02.312 19:03:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:02.312 19:03:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.312 19:03:11 -- common/autotest_common.sh@10 -- # set +x 00:05:02.312 19:03:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.141 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.141 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.141 19:03:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:03.141 19:03:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.141 19:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.141 19:03:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:03.141 19:03:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:03.141 19:03:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.141 19:03:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:03.141 19:03:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:03.141 19:03:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:03.141 19:03:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:03.141 19:03:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:03.141 
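Annotation: for each controller the trace greps the OACS field out of `nvme id-ctrl` and derives `oacs_ns_manage=8`, i.e. it tests bit 3 (mask 0x8, Namespace Management support) of the reported `0x12a`. A sketch of just that bit test on an already-captured value, since running `nvme-cli` needs a real controller; `oacs_supports_ns_mgmt` is a hypothetical name:

```shell
# Sketch of the OACS namespace-management test from the trace above.
# Takes the OACS value as printed by `nvme id-ctrl` (e.g. 0x12a).
oacs_supports_ns_mgmt() {
    local oacs=$1
    # Bit 3 (0x8) of OACS signals Namespace Management support.
    (( oacs & 0x8 ))
}
```

With OACS `0x12a` the mask is non-zero, so the trace's `[[ 8 -ne 0 ]]` branch proceeds to the `unvmcap` check.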
19:03:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:03.141 19:03:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:03.141 19:03:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.402 19:03:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.402 19:03:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:03.402 19:03:12 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:03.402 19:03:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:03.402 19:03:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:03.402 19:03:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:03.402 19:03:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:03.402 19:03:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.402 19:03:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:03.402 19:03:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:03.402 19:03:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:03.402 19:03:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.402 19:03:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:03.402 19:03:12 -- common/autotest_common.sh@1572 -- # return 0 00:05:03.402 19:03:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:03.402 19:03:12 -- common/autotest_common.sh@1580 -- # return 0 00:05:03.402 19:03:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:03.402 19:03:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:03.402 19:03:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:03.402 19:03:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:03.402 19:03:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:03.402 19:03:12 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.402 19:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.402 19:03:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:03.402 19:03:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.402 19:03:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.402 19:03:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.402 19:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:03.402 ************************************ 00:05:03.402 START TEST env 00:05:03.402 ************************************ 00:05:03.402 19:03:12 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.402 * Looking for test storage... 00:05:03.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:03.402 19:03:13 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.402 19:03:13 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.402 19:03:13 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.663 19:03:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.663 19:03:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.663 19:03:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.663 19:03:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.663 19:03:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.663 19:03:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.663 19:03:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.663 19:03:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.663 19:03:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.663 19:03:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.663 19:03:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.663 19:03:13 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:03.663 19:03:13 env -- scripts/common.sh@345 -- # : 1 00:05:03.663 19:03:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.663 19:03:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.663 19:03:13 env -- scripts/common.sh@365 -- # decimal 1 00:05:03.663 19:03:13 env -- scripts/common.sh@353 -- # local d=1 00:05:03.663 19:03:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.663 19:03:13 env -- scripts/common.sh@355 -- # echo 1 00:05:03.663 19:03:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.663 19:03:13 env -- scripts/common.sh@366 -- # decimal 2 00:05:03.663 19:03:13 env -- scripts/common.sh@353 -- # local d=2 00:05:03.663 19:03:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.663 19:03:13 env -- scripts/common.sh@355 -- # echo 2 00:05:03.663 19:03:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.663 19:03:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.663 19:03:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.663 19:03:13 env -- scripts/common.sh@368 -- # return 0 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.663 --rc genhtml_branch_coverage=1 00:05:03.663 --rc genhtml_function_coverage=1 00:05:03.663 --rc genhtml_legend=1 00:05:03.663 --rc geninfo_all_blocks=1 00:05:03.663 --rc geninfo_unexecuted_blocks=1 00:05:03.663 00:05:03.663 ' 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.663 --rc genhtml_branch_coverage=1 00:05:03.663 --rc genhtml_function_coverage=1 00:05:03.663 --rc genhtml_legend=1 00:05:03.663 --rc 
geninfo_all_blocks=1 00:05:03.663 --rc geninfo_unexecuted_blocks=1 00:05:03.663 00:05:03.663 ' 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.663 --rc genhtml_branch_coverage=1 00:05:03.663 --rc genhtml_function_coverage=1 00:05:03.663 --rc genhtml_legend=1 00:05:03.663 --rc geninfo_all_blocks=1 00:05:03.663 --rc geninfo_unexecuted_blocks=1 00:05:03.663 00:05:03.663 ' 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.663 --rc genhtml_branch_coverage=1 00:05:03.663 --rc genhtml_function_coverage=1 00:05:03.663 --rc genhtml_legend=1 00:05:03.663 --rc geninfo_all_blocks=1 00:05:03.663 --rc geninfo_unexecuted_blocks=1 00:05:03.663 00:05:03.663 ' 00:05:03.663 19:03:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.663 19:03:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.663 19:03:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.663 ************************************ 00:05:03.663 START TEST env_memory 00:05:03.663 ************************************ 00:05:03.663 19:03:13 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.663 00:05:03.663 00:05:03.663 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.663 http://cunit.sourceforge.net/ 00:05:03.663 00:05:03.663 00:05:03.663 Suite: memory 00:05:03.663 Test: alloc and free memory map ...[2024-11-27 19:03:13.210824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.663 passed 00:05:03.663 Test: mem map translation ...[2024-11-27 19:03:13.252969] 
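Annotation: the `lt 1.15 2` / `cmp_versions` trace above compares dotted versions component-wise after splitting on `IFS=.-:`. A simplified dotted-only sketch of the same idea (hypothetical name `version_lt`, not the script's own helper, and without the `-`/`:` separators the real one also splits on):

```shell
# Sketch of component-wise version comparison, as in the cmp_versions trace.
# Returns success iff $1 is strictly less than $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0 (1.15 == 1.15.0).
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}
```

Numeric per-component comparison is what makes `1.2 < 1.10` hold, which a plain string compare would get wrong.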
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.663 [2024-11-27 19:03:13.253069] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.663 [2024-11-27 19:03:13.253126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.663 [2024-11-27 19:03:13.253150] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.923 passed 00:05:03.924 Test: mem map registration ...[2024-11-27 19:03:13.318302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:03.924 [2024-11-27 19:03:13.318359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:03.924 passed 00:05:03.924 Test: mem map adjacent registrations ...passed 00:05:03.924 00:05:03.924 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.924 suites 1 1 n/a 0 0 00:05:03.924 tests 4 4 4 0 0 00:05:03.924 asserts 152 152 152 0 n/a 00:05:03.924 00:05:03.924 Elapsed time = 0.228 seconds 00:05:03.924 00:05:03.924 real 0m0.282s 00:05:03.924 user 0m0.242s 00:05:03.924 sys 0m0.030s 00:05:03.924 19:03:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.924 ************************************ 00:05:03.924 END TEST env_memory 00:05:03.924 ************************************ 00:05:03.924 19:03:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.924 19:03:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.924 
19:03:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.924 19:03:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.924 19:03:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.924 ************************************ 00:05:03.924 START TEST env_vtophys 00:05:03.924 ************************************ 00:05:03.924 19:03:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.924 EAL: lib.eal log level changed from notice to debug 00:05:03.924 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 1 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 2 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 3 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 4 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 5 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 6 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 7 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 8 as core 0 on socket 0 00:05:03.924 EAL: Detected lcore 9 as core 0 on socket 0 00:05:03.924 EAL: Maximum logical cores by configuration: 128 00:05:03.924 EAL: Detected CPU lcores: 10 00:05:03.924 EAL: Detected NUMA nodes: 1 00:05:03.924 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:03.924 EAL: Detected shared linkage of DPDK 00:05:04.184 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.184 EAL: Selected IOVA mode 'PA' 00:05:04.184 EAL: Probing VFIO support... 00:05:04.184 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:04.184 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:04.184 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.184 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.184 EAL: Setting up physically contiguous memory... 
00:05:04.184 EAL: Setting maximum number of open files to 524288 00:05:04.184 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.184 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.184 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.184 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.184 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.184 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.184 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.184 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.184 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.184 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.184 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.184 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.184 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.184 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.184 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.184 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.184 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.184 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.184 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.184 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.184 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.184 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.184 EAL: Hugepages will be freed exactly as allocated. 
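Annotation: each EAL memseg list above reserves virtual address space for `n_segs:8192` hugepages of `hugepage_sz:2097152` bytes, which is exactly the `0x400000000`-byte (16 GiB) area the trace reports four times. Checking the arithmetic with a tiny hypothetical helper:

```shell
# Sketch: VA reservation size per EAL memseg list = n_segs * hugepage_sz.
memseg_va_bytes() {
    local n_segs=$1 hugepage_sz=$2
    echo $(( n_segs * hugepage_sz ))
}
```

8192 segments times 2 MiB gives 17179869184 bytes, matching the `size = 0x400000000` lines; with four lists, EAL has reserved 64 GiB of VA before any hugepage is actually backed.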
00:05:04.184 EAL: No shared files mode enabled, IPC is disabled 00:05:04.184 EAL: No shared files mode enabled, IPC is disabled 00:05:04.184 EAL: TSC frequency is ~2290000 KHz 00:05:04.184 EAL: Main lcore 0 is ready (tid=7f3101936a40;cpuset=[0]) 00:05:04.184 EAL: Trying to obtain current memory policy. 00:05:04.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.184 EAL: Restoring previous memory policy: 0 00:05:04.184 EAL: request: mp_malloc_sync 00:05:04.184 EAL: No shared files mode enabled, IPC is disabled 00:05:04.184 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.184 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:04.184 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.184 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.184 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:04.184 00:05:04.184 00:05:04.184 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.184 http://cunit.sourceforge.net/ 00:05:04.184 00:05:04.184 00:05:04.184 Suite: components_suite 00:05:04.752 Test: vtophys_malloc_test ...passed 00:05:04.752 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.752 EAL: Restoring previous memory policy: 4 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.752 EAL: Trying to obtain current memory policy. 
00:05:04.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.752 EAL: Restoring previous memory policy: 4 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.752 EAL: Trying to obtain current memory policy. 00:05:04.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.752 EAL: Restoring previous memory policy: 4 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.752 EAL: Trying to obtain current memory policy. 00:05:04.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.752 EAL: Restoring previous memory policy: 4 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.752 EAL: Trying to obtain current memory policy. 
00:05:04.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.752 EAL: Restoring previous memory policy: 4 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.752 EAL: request: mp_malloc_sync 00:05:04.752 EAL: No shared files mode enabled, IPC is disabled 00:05:04.752 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.010 EAL: Trying to obtain current memory policy. 00:05:05.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.010 EAL: Restoring previous memory policy: 4 00:05:05.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.010 EAL: request: mp_malloc_sync 00:05:05.010 EAL: No shared files mode enabled, IPC is disabled 00:05:05.010 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.010 EAL: request: mp_malloc_sync 00:05:05.010 EAL: No shared files mode enabled, IPC is disabled 00:05:05.010 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.269 EAL: Trying to obtain current memory policy. 00:05:05.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.269 EAL: Restoring previous memory policy: 4 00:05:05.269 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.269 EAL: request: mp_malloc_sync 00:05:05.269 EAL: No shared files mode enabled, IPC is disabled 00:05:05.269 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.533 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.533 EAL: request: mp_malloc_sync 00:05:05.533 EAL: No shared files mode enabled, IPC is disabled 00:05:05.533 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.804 EAL: Trying to obtain current memory policy. 
00:05:05.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.804 EAL: Restoring previous memory policy: 4 00:05:05.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.804 EAL: request: mp_malloc_sync 00:05:05.804 EAL: No shared files mode enabled, IPC is disabled 00:05:05.804 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.372 EAL: request: mp_malloc_sync 00:05:06.372 EAL: No shared files mode enabled, IPC is disabled 00:05:06.372 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.938 EAL: Trying to obtain current memory policy. 00:05:06.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.938 EAL: Restoring previous memory policy: 4 00:05:06.938 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.938 EAL: request: mp_malloc_sync 00:05:06.938 EAL: No shared files mode enabled, IPC is disabled 00:05:06.938 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.133 EAL: request: mp_malloc_sync 00:05:08.133 EAL: No shared files mode enabled, IPC is disabled 00:05:08.133 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.070 EAL: Trying to obtain current memory policy. 
00:05:09.070 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:09.329 EAL: Restoring previous memory policy: 4
00:05:09.329 EAL: Calling mem event callback 'spdk:(nil)'
00:05:09.329 EAL: request: mp_malloc_sync
00:05:09.329 EAL: No shared files mode enabled, IPC is disabled
00:05:09.329 EAL: Heap on socket 0 was expanded by 1026MB
00:05:11.234 EAL: Calling mem event callback 'spdk:(nil)'
00:05:11.492 EAL: request: mp_malloc_sync
00:05:11.492 EAL: No shared files mode enabled, IPC is disabled
00:05:11.492 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:13.396 passed
00:05:13.396
00:05:13.396 Run Summary: Type Total Ran Passed Failed Inactive
00:05:13.396 suites 1 1 n/a 0 0
00:05:13.396 tests 2 2 2 0 0
00:05:13.396 asserts 5789 5789 5789 0 n/a
00:05:13.396
00:05:13.396 Elapsed time = 8.806 seconds
00:05:13.396 EAL: Calling mem event callback 'spdk:(nil)'
00:05:13.396 EAL: request: mp_malloc_sync
00:05:13.396 EAL: No shared files mode enabled, IPC is disabled
00:05:13.396 EAL: Heap on socket 0 was shrunk by 2MB
00:05:13.396 EAL: No shared files mode enabled, IPC is disabled
00:05:13.396 EAL: No shared files mode enabled, IPC is disabled
00:05:13.396 EAL: No shared files mode enabled, IPC is disabled
00:05:13.396
00:05:13.396 real 0m9.149s
00:05:13.396 user 0m7.691s
00:05:13.396 sys 0m1.294s
00:05:13.396 19:03:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.396 ************************************
00:05:13.396 END TEST env_vtophys
00:05:13.396 ************************************
00:05:13.396 19:03:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:13.396 19:03:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:13.396 19:03:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.396 19:03:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.396 19:03:22 env -- common/autotest_common.sh@10 -- # set +x
00:05:13.396 ************************************
00:05:13.396 START TEST env_pci
00:05:13.396 ************************************
00:05:13.396 19:03:22 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:13.397
00:05:13.397
00:05:13.397 CUnit - A unit testing framework for C - Version 2.1-3
00:05:13.397 http://cunit.sourceforge.net/
00:05:13.397
00:05:13.397
00:05:13.397 Suite: pci
00:05:13.397 Test: pci_hook ...[2024-11-27 19:03:22.753043] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56817 has claimed it
00:05:13.397 passed
00:05:13.397
00:05:13.397 Run Summary: Type Total Ran Passed Failed Inactive
00:05:13.397 suites 1 1 n/a 0 0
00:05:13.397 tests 1 1 1 0 0
00:05:13.397 asserts 25 25 25 0 n/a
00:05:13.397
00:05:13.397 Elapsed time = 0.005 seconds
00:05:13.397 EAL: Cannot find device (10000:00:01.0)
00:05:13.397 EAL: Failed to attach device on primary process
00:05:13.397
00:05:13.397 real 0m0.104s
00:05:13.397 user 0m0.053s
00:05:13.397 sys 0m0.050s
00:05:13.397 19:03:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.397 ************************************
00:05:13.397 END TEST env_pci
00:05:13.397 ************************************
00:05:13.397 19:03:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:13.397 19:03:22 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:13.397 19:03:22 env -- env/env.sh@15 -- # uname
00:05:13.397 19:03:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:13.397 19:03:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:13.397 19:03:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:13.397 19:03:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:13.397 19:03:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.397 19:03:22 env -- common/autotest_common.sh@10 -- # set +x
00:05:13.397 ************************************
00:05:13.397 START TEST env_dpdk_post_init
00:05:13.397 ************************************
00:05:13.397 19:03:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:13.397 EAL: Detected CPU lcores: 10
00:05:13.397 EAL: Detected NUMA nodes: 1
00:05:13.397 EAL: Detected shared linkage of DPDK
00:05:13.397 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:13.397 EAL: Selected IOVA mode 'PA'
00:05:13.658 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:13.658 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:13.658 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:13.658 Starting DPDK initialization...
00:05:13.658 Starting SPDK post initialization...
00:05:13.658 SPDK NVMe probe
00:05:13.658 Attaching to 0000:00:10.0
00:05:13.658 Attaching to 0000:00:11.0
00:05:13.658 Attached to 0000:00:10.0
00:05:13.658 Attached to 0000:00:11.0
00:05:13.658 Cleaning up...
00:05:13.658
00:05:13.658 real 0m0.306s
00:05:13.658 user 0m0.095s
00:05:13.658 sys 0m0.112s
00:05:13.658 19:03:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.658 19:03:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:13.658 ************************************
00:05:13.658 END TEST env_dpdk_post_init
00:05:13.658 ************************************
00:05:13.658 19:03:23 env -- env/env.sh@26 -- # uname
00:05:13.658 19:03:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:13.658 19:03:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:13.658 19:03:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.658 19:03:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.658 19:03:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:13.658 ************************************
00:05:13.658 START TEST env_mem_callbacks
00:05:13.658 ************************************
00:05:13.658 19:03:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:13.917 EAL: Detected CPU lcores: 10
00:05:13.917 EAL: Detected NUMA nodes: 1
00:05:13.917 EAL: Detected shared linkage of DPDK
00:05:13.917 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:13.917 EAL: Selected IOVA mode 'PA'
00:05:13.917 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:13.917
00:05:13.917
00:05:13.917 CUnit - A unit testing framework for C - Version 2.1-3
00:05:13.917 http://cunit.sourceforge.net/
00:05:13.917
00:05:13.917
00:05:13.917 Suite: memory
00:05:13.917 Test: test ...
00:05:13.917 register 0x200000200000 2097152
00:05:13.917 malloc 3145728
00:05:13.917 register 0x200000400000 4194304
00:05:13.917 buf 0x2000004fffc0 len 3145728 PASSED
00:05:13.917 malloc 64
00:05:13.917 buf 0x2000004ffec0 len 64 PASSED
00:05:13.917 malloc 4194304
00:05:13.917 register 0x200000800000 6291456
00:05:13.917 buf 0x2000009fffc0 len 4194304 PASSED
00:05:13.917 free 0x2000004fffc0 3145728
00:05:13.917 free 0x2000004ffec0 64
00:05:13.917 unregister 0x200000400000 4194304 PASSED
00:05:13.917 free 0x2000009fffc0 4194304
00:05:13.917 unregister 0x200000800000 6291456 PASSED
00:05:13.917 malloc 8388608
00:05:13.917 register 0x200000400000 10485760
00:05:13.917 buf 0x2000005fffc0 len 8388608 PASSED
00:05:13.917 free 0x2000005fffc0 8388608
00:05:13.917 unregister 0x200000400000 10485760 PASSED
00:05:13.917 passed
00:05:13.917
00:05:13.917 Run Summary: Type Total Ran Passed Failed Inactive
00:05:13.917 suites 1 1 n/a 0 0
00:05:13.917 tests 1 1 1 0 0
00:05:13.917 asserts 15 15 15 0 n/a
00:05:13.917
00:05:13.917 Elapsed time = 0.082 seconds
00:05:14.177
00:05:14.177 real 0m0.292s
00:05:14.177 user 0m0.115s
00:05:14.177 sys 0m0.073s
00:05:14.177 ************************************
00:05:14.177 END TEST env_mem_callbacks
00:05:14.177 ************************************
00:05:14.177 19:03:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.177 19:03:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:14.177 ************************************
00:05:14.177 END TEST env
00:05:14.177 ************************************
00:05:14.177
00:05:14.177 real 0m10.727s
00:05:14.177 user 0m8.409s
00:05:14.177 sys 0m1.956s
00:05:14.177 19:03:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.177 19:03:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:14.177 19:03:23 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:14.177 19:03:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.177 19:03:23 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.177 19:03:23 -- common/autotest_common.sh@10 -- # set +x
00:05:14.177 ************************************
00:05:14.177 START TEST rpc
00:05:14.177 ************************************
00:05:14.177 19:03:23 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:14.177 * Looking for test storage...
00:05:14.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:14.437 19:03:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:14.437 19:03:23 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:14.437 19:03:23 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:14.437 19:03:23 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:14.437 19:03:23 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:14.437 19:03:23 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:14.437 19:03:23 rpc -- scripts/common.sh@345 -- # : 1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:14.437 19:03:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:14.437 19:03:23 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@353 -- # local d=1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:14.437 19:03:23 rpc -- scripts/common.sh@355 -- # echo 1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:14.437 19:03:23 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@353 -- # local d=2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:14.437 19:03:23 rpc -- scripts/common.sh@355 -- # echo 2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:14.437 19:03:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:14.437 19:03:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:14.437 19:03:23 rpc -- scripts/common.sh@368 -- # return 0
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.437 --rc genhtml_branch_coverage=1
00:05:14.437 --rc genhtml_function_coverage=1
00:05:14.437 --rc genhtml_legend=1
00:05:14.437 --rc geninfo_all_blocks=1
00:05:14.437 --rc geninfo_unexecuted_blocks=1
00:05:14.437
00:05:14.437 '
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.437 --rc genhtml_branch_coverage=1
00:05:14.437 --rc genhtml_function_coverage=1
00:05:14.437 --rc genhtml_legend=1
00:05:14.437 --rc geninfo_all_blocks=1
00:05:14.437 --rc geninfo_unexecuted_blocks=1
00:05:14.437
00:05:14.437 '
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.437 --rc genhtml_branch_coverage=1
00:05:14.437 --rc genhtml_function_coverage=1
00:05:14.437 --rc genhtml_legend=1
00:05:14.437 --rc geninfo_all_blocks=1
00:05:14.437 --rc geninfo_unexecuted_blocks=1
00:05:14.437
00:05:14.437 '
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:14.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.437 --rc genhtml_branch_coverage=1
00:05:14.437 --rc genhtml_function_coverage=1
00:05:14.437 --rc genhtml_legend=1
00:05:14.437 --rc geninfo_all_blocks=1
00:05:14.437 --rc geninfo_unexecuted_blocks=1
00:05:14.437
00:05:14.437 '
00:05:14.437 19:03:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56944
00:05:14.437 19:03:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:05:14.437 19:03:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:14.437 19:03:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56944
00:05:14.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 56944 ']'
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:14.437 19:03:23 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:14.437 [2024-11-27 19:03:24.037912] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:05:14.438 [2024-11-27 19:03:24.038044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56944 ]
00:05:14.696 [2024-11-27 19:03:24.213548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.955 [2024-11-27 19:03:24.351323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:14.955 [2024-11-27 19:03:24.351400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56944' to capture a snapshot of events at runtime.
00:05:14.955 [2024-11-27 19:03:24.351411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:14.955 [2024-11-27 19:03:24.351438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:14.955 [2024-11-27 19:03:24.351447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56944 for offline analysis/debug.
00:05:14.955 [2024-11-27 19:03:24.352749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.891 19:03:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:15.891 19:03:25 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:15.891 19:03:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:15.891 19:03:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:15.891 19:03:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:15.891 19:03:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:15.891 19:03:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:15.891 19:03:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:15.891 19:03:25 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:15.892 ************************************
00:05:15.892 START TEST rpc_integrity
00:05:15.892 ************************************
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:15.892 {
00:05:15.892 "name": "Malloc0",
00:05:15.892 "aliases": [
00:05:15.892 "4c4363ba-cfd6-43b1-9310-fbf6b771e377"
00:05:15.892 ],
00:05:15.892 "product_name": "Malloc disk",
00:05:15.892 "block_size": 512,
00:05:15.892 "num_blocks": 16384,
00:05:15.892 "uuid": "4c4363ba-cfd6-43b1-9310-fbf6b771e377",
00:05:15.892 "assigned_rate_limits": {
00:05:15.892 "rw_ios_per_sec": 0,
00:05:15.892 "rw_mbytes_per_sec": 0,
00:05:15.892 "r_mbytes_per_sec": 0,
00:05:15.892 "w_mbytes_per_sec": 0
00:05:15.892 },
00:05:15.892 "claimed": false,
00:05:15.892 "zoned": false,
00:05:15.892 "supported_io_types": {
00:05:15.892 "read": true,
00:05:15.892 "write": true,
00:05:15.892 "unmap": true,
00:05:15.892 "flush": true,
00:05:15.892 "reset": true,
00:05:15.892 "nvme_admin": false,
00:05:15.892 "nvme_io": false,
00:05:15.892 "nvme_io_md": false,
00:05:15.892 "write_zeroes": true,
00:05:15.892 "zcopy": true,
00:05:15.892 "get_zone_info": false,
00:05:15.892 "zone_management": false,
00:05:15.892 "zone_append": false,
00:05:15.892 "compare": false,
00:05:15.892 "compare_and_write": false,
00:05:15.892 "abort": true,
00:05:15.892 "seek_hole": false,
00:05:15.892 "seek_data": false,
00:05:15.892 "copy": true,
00:05:15.892 "nvme_iov_md": false
00:05:15.892 },
00:05:15.892 "memory_domains": [
00:05:15.892 {
00:05:15.892 "dma_device_id": "system",
00:05:15.892 "dma_device_type": 1
00:05:15.892 },
00:05:15.892 {
00:05:15.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:15.892 "dma_device_type": 2
00:05:15.892 }
00:05:15.892 ],
00:05:15.892 "driver_specific": {}
00:05:15.892 }
00:05:15.892 ]'
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:15.892 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:15.892 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.156 [2024-11-27 19:03:25.533067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:16.156 [2024-11-27 19:03:25.533194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:16.156 [2024-11-27 19:03:25.533238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:05:16.156 [2024-11-27 19:03:25.533279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:16.156 [2024-11-27 19:03:25.535840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:16.156 [2024-11-27 19:03:25.535920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:16.156 Passthru0
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:16.156 {
00:05:16.156 "name": "Malloc0",
00:05:16.156 "aliases": [
00:05:16.156 "4c4363ba-cfd6-43b1-9310-fbf6b771e377"
00:05:16.156 ],
00:05:16.156 "product_name": "Malloc disk",
00:05:16.156 "block_size": 512,
00:05:16.156 "num_blocks": 16384,
00:05:16.156 "uuid": "4c4363ba-cfd6-43b1-9310-fbf6b771e377",
00:05:16.156 "assigned_rate_limits": {
00:05:16.156 "rw_ios_per_sec": 0,
00:05:16.156 "rw_mbytes_per_sec": 0,
00:05:16.156 "r_mbytes_per_sec": 0,
00:05:16.156 "w_mbytes_per_sec": 0
00:05:16.156 },
00:05:16.156 "claimed": true,
00:05:16.156 "claim_type": "exclusive_write",
00:05:16.156 "zoned": false,
00:05:16.156 "supported_io_types": {
00:05:16.156 "read": true,
00:05:16.156 "write": true,
00:05:16.156 "unmap": true,
00:05:16.156 "flush": true,
00:05:16.156 "reset": true,
00:05:16.156 "nvme_admin": false,
00:05:16.156 "nvme_io": false,
00:05:16.156 "nvme_io_md": false,
00:05:16.156 "write_zeroes": true,
00:05:16.156 "zcopy": true,
00:05:16.156 "get_zone_info": false,
00:05:16.156 "zone_management": false,
00:05:16.156 "zone_append": false,
00:05:16.156 "compare": false,
00:05:16.156 "compare_and_write": false,
00:05:16.156 "abort": true,
00:05:16.156 "seek_hole": false,
00:05:16.156 "seek_data": false,
00:05:16.156 "copy": true,
00:05:16.156 "nvme_iov_md": false
00:05:16.156 },
00:05:16.156 "memory_domains": [
00:05:16.156 {
00:05:16.156 "dma_device_id": "system",
00:05:16.156 "dma_device_type": 1
00:05:16.156 },
00:05:16.156 {
00:05:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:16.156 "dma_device_type": 2
00:05:16.156 }
00:05:16.156 ],
00:05:16.156 "driver_specific": {}
00:05:16.156 },
00:05:16.156 {
00:05:16.156 "name": "Passthru0",
00:05:16.156 "aliases": [
00:05:16.156 "3a575fd4-0bff-535f-8295-12104de1b8a5"
00:05:16.156 ],
00:05:16.156 "product_name": "passthru",
00:05:16.156 "block_size": 512,
00:05:16.156 "num_blocks": 16384,
00:05:16.156 "uuid": "3a575fd4-0bff-535f-8295-12104de1b8a5",
00:05:16.156 "assigned_rate_limits": {
00:05:16.156 "rw_ios_per_sec": 0,
00:05:16.156 "rw_mbytes_per_sec": 0,
00:05:16.156 "r_mbytes_per_sec": 0,
00:05:16.156 "w_mbytes_per_sec": 0
00:05:16.156 },
00:05:16.156 "claimed": false,
00:05:16.156 "zoned": false,
00:05:16.156 "supported_io_types": {
00:05:16.156 "read": true,
00:05:16.156 "write": true,
00:05:16.156 "unmap": true,
00:05:16.156 "flush": true,
00:05:16.156 "reset": true,
00:05:16.156 "nvme_admin": false,
00:05:16.156 "nvme_io": false,
00:05:16.156 "nvme_io_md": false,
00:05:16.156 "write_zeroes": true,
00:05:16.156 "zcopy": true,
00:05:16.156 "get_zone_info": false,
00:05:16.156 "zone_management": false,
00:05:16.156 "zone_append": false,
00:05:16.156 "compare": false,
00:05:16.156 "compare_and_write": false,
00:05:16.156 "abort": true,
00:05:16.156 "seek_hole": false,
00:05:16.156 "seek_data": false,
00:05:16.156 "copy": true,
00:05:16.156 "nvme_iov_md": false
00:05:16.156 },
00:05:16.156 "memory_domains": [
00:05:16.156 {
00:05:16.156 "dma_device_id": "system",
00:05:16.156 "dma_device_type": 1
00:05:16.156 },
00:05:16.156 {
00:05:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:16.156 "dma_device_type": 2
00:05:16.156 }
00:05:16.156 ],
00:05:16.156 "driver_specific": {
00:05:16.156 "passthru": {
00:05:16.156 "name": "Passthru0",
00:05:16.156 "base_bdev_name": "Malloc0"
00:05:16.156 }
00:05:16.156 }
00:05:16.156 }
00:05:16.156 ]'
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.156 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:16.156 ************************************
00:05:16.156 END TEST rpc_integrity
00:05:16.156 ************************************
00:05:16.156 19:03:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:16.157
00:05:16.157 real 0m0.353s
00:05:16.157 user 0m0.191s
00:05:16.157 sys 0m0.054s
00:05:16.157 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.157 19:03:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:16.157 19:03:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:16.157 19:03:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:16.157 19:03:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.157 19:03:25 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 ************************************
00:05:16.425 START TEST rpc_plugins
00:05:16.425 ************************************
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:16.425 {
00:05:16.425 "name": "Malloc1",
00:05:16.425 "aliases": [
00:05:16.425 "89a32bf4-f79e-4dd5-bfe0-7d733c610677"
00:05:16.425 ],
00:05:16.425 "product_name": "Malloc disk",
00:05:16.425 "block_size": 4096,
00:05:16.425 "num_blocks": 256,
00:05:16.425 "uuid": "89a32bf4-f79e-4dd5-bfe0-7d733c610677",
00:05:16.425 "assigned_rate_limits": {
00:05:16.425 "rw_ios_per_sec": 0,
00:05:16.425 "rw_mbytes_per_sec": 0,
00:05:16.425 "r_mbytes_per_sec": 0,
00:05:16.425 "w_mbytes_per_sec": 0
00:05:16.425 },
00:05:16.425 "claimed": false,
00:05:16.425 "zoned": false,
00:05:16.425 "supported_io_types": {
00:05:16.425 "read": true,
00:05:16.425 "write": true,
00:05:16.425 "unmap": true,
00:05:16.425 "flush": true,
00:05:16.425 "reset": true,
00:05:16.425 "nvme_admin": false,
00:05:16.425 "nvme_io": false,
00:05:16.425 "nvme_io_md": false,
00:05:16.425 "write_zeroes": true,
00:05:16.425 "zcopy": true,
00:05:16.425 "get_zone_info": false,
00:05:16.425 "zone_management": false,
00:05:16.425 "zone_append": false,
00:05:16.425 "compare": false,
00:05:16.425 "compare_and_write": false,
00:05:16.425 "abort": true,
00:05:16.425 "seek_hole": false,
00:05:16.425 "seek_data": false,
00:05:16.425 "copy": true,
00:05:16.425 "nvme_iov_md": false
00:05:16.425 },
00:05:16.425 "memory_domains": [
00:05:16.425 {
00:05:16.425 "dma_device_id": "system",
00:05:16.425 "dma_device_type": 1
00:05:16.425 },
00:05:16.425 {
00:05:16.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:16.425 "dma_device_type": 2
00:05:16.425 }
00:05:16.425 ],
00:05:16.425 "driver_specific": {}
00:05:16.425 }
00:05:16.425 ]'
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:16.425 ************************************
00:05:16.425 END TEST rpc_plugins
00:05:16.425 ************************************
00:05:16.425 19:03:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:16.425
00:05:16.425 real 0m0.179s
00:05:16.425 user 0m0.104s
00:05:16.425 sys 0m0.028s
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.425 19:03:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 19:03:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:16.425 19:03:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:16.425 19:03:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.425 19:03:26 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.425 ************************************
00:05:16.425 START TEST rpc_trace_cmd_test
00:05:16.425 ************************************
00:05:16.425 19:03:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:05:16.425 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:16.425 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:16.425 19:03:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:16.425 19:03:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:16.685 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56944",
00:05:16.685 "tpoint_group_mask": "0x8",
00:05:16.685 "iscsi_conn": {
00:05:16.685 "mask": "0x2",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "scsi": {
00:05:16.685 "mask": "0x4",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "bdev": {
00:05:16.685 "mask": "0x8",
00:05:16.685 "tpoint_mask": "0xffffffffffffffff"
00:05:16.685 },
00:05:16.685 "nvmf_rdma": {
00:05:16.685 "mask": "0x10",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "nvmf_tcp": {
00:05:16.685 "mask": "0x20",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "ftl": {
00:05:16.685 "mask": "0x40",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "blobfs": {
00:05:16.685 "mask": "0x80",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "dsa": {
00:05:16.685 "mask": "0x200",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "thread": {
00:05:16.685 "mask": "0x400",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "nvme_pcie": {
00:05:16.685 "mask": "0x800",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "iaa": {
00:05:16.685 "mask": "0x1000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "nvme_tcp": {
00:05:16.685 "mask": "0x2000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "bdev_nvme": {
00:05:16.685 "mask": "0x4000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "sock": {
00:05:16.685 "mask": "0x8000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "blob": {
00:05:16.685 "mask": "0x10000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "bdev_raid": {
00:05:16.685 "mask": "0x20000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 },
00:05:16.685 "scheduler": {
00:05:16.685 "mask": "0x40000",
00:05:16.685 "tpoint_mask": "0x0"
00:05:16.685 }
00:05:16.685 }'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:16.685
00:05:16.685 real 0m0.255s
00:05:16.685 user 0m0.195s
00:05:16.685 sys 0m0.048s
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.685 19:03:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.685 ************************************ 00:05:16.685 END TEST rpc_trace_cmd_test 00:05:16.685 ************************************ 00:05:16.944 19:03:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:16.944 19:03:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:16.944 19:03:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:16.944 19:03:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.944 19:03:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.944 19:03:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 ************************************ 00:05:16.944 START TEST rpc_daemon_integrity 00:05:16.944 ************************************ 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.945 { 00:05:16.945 "name": "Malloc2", 00:05:16.945 "aliases": [ 00:05:16.945 "61ebcb18-ef51-4abf-987d-166b6388db08" 00:05:16.945 ], 00:05:16.945 "product_name": "Malloc disk", 00:05:16.945 "block_size": 512, 00:05:16.945 "num_blocks": 16384, 00:05:16.945 "uuid": "61ebcb18-ef51-4abf-987d-166b6388db08", 00:05:16.945 "assigned_rate_limits": { 00:05:16.945 "rw_ios_per_sec": 0, 00:05:16.945 "rw_mbytes_per_sec": 0, 00:05:16.945 "r_mbytes_per_sec": 0, 00:05:16.945 "w_mbytes_per_sec": 0 00:05:16.945 }, 00:05:16.945 "claimed": false, 00:05:16.945 "zoned": false, 00:05:16.945 "supported_io_types": { 00:05:16.945 "read": true, 00:05:16.945 "write": true, 00:05:16.945 "unmap": true, 00:05:16.945 "flush": true, 00:05:16.945 "reset": true, 00:05:16.945 "nvme_admin": false, 00:05:16.945 "nvme_io": false, 00:05:16.945 "nvme_io_md": false, 00:05:16.945 "write_zeroes": true, 00:05:16.945 "zcopy": true, 00:05:16.945 "get_zone_info": false, 00:05:16.945 "zone_management": false, 00:05:16.945 "zone_append": false, 00:05:16.945 "compare": false, 00:05:16.945 "compare_and_write": false, 00:05:16.945 "abort": true, 00:05:16.945 "seek_hole": false, 00:05:16.945 "seek_data": false, 00:05:16.945 "copy": true, 00:05:16.945 "nvme_iov_md": false 00:05:16.945 }, 00:05:16.945 "memory_domains": [ 00:05:16.945 { 00:05:16.945 "dma_device_id": "system", 00:05:16.945 "dma_device_type": 1 00:05:16.945 }, 00:05:16.945 { 00:05:16.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.945 "dma_device_type": 2 00:05:16.945 } 
00:05:16.945 ], 00:05:16.945 "driver_specific": {} 00:05:16.945 } 00:05:16.945 ]' 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.945 [2024-11-27 19:03:26.526339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:16.945 [2024-11-27 19:03:26.526446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.945 [2024-11-27 19:03:26.526473] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:16.945 [2024-11-27 19:03:26.526485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.945 [2024-11-27 19:03:26.528951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.945 [2024-11-27 19:03:26.528993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.945 Passthru0 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.945 { 00:05:16.945 "name": "Malloc2", 00:05:16.945 "aliases": [ 00:05:16.945 "61ebcb18-ef51-4abf-987d-166b6388db08" 
00:05:16.945 ], 00:05:16.945 "product_name": "Malloc disk", 00:05:16.945 "block_size": 512, 00:05:16.945 "num_blocks": 16384, 00:05:16.945 "uuid": "61ebcb18-ef51-4abf-987d-166b6388db08", 00:05:16.945 "assigned_rate_limits": { 00:05:16.945 "rw_ios_per_sec": 0, 00:05:16.945 "rw_mbytes_per_sec": 0, 00:05:16.945 "r_mbytes_per_sec": 0, 00:05:16.945 "w_mbytes_per_sec": 0 00:05:16.945 }, 00:05:16.945 "claimed": true, 00:05:16.945 "claim_type": "exclusive_write", 00:05:16.945 "zoned": false, 00:05:16.945 "supported_io_types": { 00:05:16.945 "read": true, 00:05:16.945 "write": true, 00:05:16.945 "unmap": true, 00:05:16.945 "flush": true, 00:05:16.945 "reset": true, 00:05:16.945 "nvme_admin": false, 00:05:16.945 "nvme_io": false, 00:05:16.945 "nvme_io_md": false, 00:05:16.945 "write_zeroes": true, 00:05:16.945 "zcopy": true, 00:05:16.945 "get_zone_info": false, 00:05:16.945 "zone_management": false, 00:05:16.945 "zone_append": false, 00:05:16.945 "compare": false, 00:05:16.945 "compare_and_write": false, 00:05:16.945 "abort": true, 00:05:16.945 "seek_hole": false, 00:05:16.945 "seek_data": false, 00:05:16.945 "copy": true, 00:05:16.945 "nvme_iov_md": false 00:05:16.945 }, 00:05:16.945 "memory_domains": [ 00:05:16.945 { 00:05:16.945 "dma_device_id": "system", 00:05:16.945 "dma_device_type": 1 00:05:16.945 }, 00:05:16.945 { 00:05:16.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.945 "dma_device_type": 2 00:05:16.945 } 00:05:16.945 ], 00:05:16.945 "driver_specific": {} 00:05:16.945 }, 00:05:16.945 { 00:05:16.945 "name": "Passthru0", 00:05:16.945 "aliases": [ 00:05:16.945 "5974d89a-ec09-5b3e-84f6-6fc4fdfc8f69" 00:05:16.945 ], 00:05:16.945 "product_name": "passthru", 00:05:16.945 "block_size": 512, 00:05:16.945 "num_blocks": 16384, 00:05:16.945 "uuid": "5974d89a-ec09-5b3e-84f6-6fc4fdfc8f69", 00:05:16.945 "assigned_rate_limits": { 00:05:16.945 "rw_ios_per_sec": 0, 00:05:16.945 "rw_mbytes_per_sec": 0, 00:05:16.945 "r_mbytes_per_sec": 0, 00:05:16.945 "w_mbytes_per_sec": 0 
00:05:16.945 }, 00:05:16.945 "claimed": false, 00:05:16.945 "zoned": false, 00:05:16.945 "supported_io_types": { 00:05:16.945 "read": true, 00:05:16.945 "write": true, 00:05:16.945 "unmap": true, 00:05:16.945 "flush": true, 00:05:16.945 "reset": true, 00:05:16.945 "nvme_admin": false, 00:05:16.945 "nvme_io": false, 00:05:16.945 "nvme_io_md": false, 00:05:16.945 "write_zeroes": true, 00:05:16.945 "zcopy": true, 00:05:16.945 "get_zone_info": false, 00:05:16.945 "zone_management": false, 00:05:16.945 "zone_append": false, 00:05:16.945 "compare": false, 00:05:16.945 "compare_and_write": false, 00:05:16.945 "abort": true, 00:05:16.945 "seek_hole": false, 00:05:16.945 "seek_data": false, 00:05:16.945 "copy": true, 00:05:16.945 "nvme_iov_md": false 00:05:16.945 }, 00:05:16.945 "memory_domains": [ 00:05:16.945 { 00:05:16.945 "dma_device_id": "system", 00:05:16.945 "dma_device_type": 1 00:05:16.945 }, 00:05:16.945 { 00:05:16.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.945 "dma_device_type": 2 00:05:16.945 } 00:05:16.945 ], 00:05:16.945 "driver_specific": { 00:05:16.945 "passthru": { 00:05:16.945 "name": "Passthru0", 00:05:16.945 "base_bdev_name": "Malloc2" 00:05:16.945 } 00:05:16.945 } 00:05:16.945 } 00:05:16.945 ]' 00:05:16.945 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.204 ************************************ 00:05:17.204 END TEST rpc_daemon_integrity 00:05:17.204 ************************************ 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.204 00:05:17.204 real 0m0.361s 00:05:17.204 user 0m0.203s 00:05:17.204 sys 0m0.053s 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.204 19:03:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.204 19:03:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.204 19:03:26 rpc -- rpc/rpc.sh@84 -- # killprocess 56944 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 56944 ']' 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@958 -- # kill -0 56944 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56944 00:05:17.204 killing process with pid 56944 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56944' 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@973 -- # kill 56944 00:05:17.204 19:03:26 rpc -- common/autotest_common.sh@978 -- # wait 56944 00:05:20.488 00:05:20.488 real 0m5.691s 00:05:20.488 user 0m6.057s 00:05:20.488 sys 0m1.137s 00:05:20.488 19:03:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.488 ************************************ 00:05:20.488 END TEST rpc 00:05:20.488 ************************************ 00:05:20.488 19:03:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 19:03:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:20.488 19:03:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.488 19:03:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.488 19:03:29 -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 ************************************ 00:05:20.488 START TEST skip_rpc 00:05:20.488 ************************************ 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:20.488 * Looking for test storage... 
00:05:20.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.488 19:03:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.488 --rc genhtml_branch_coverage=1 00:05:20.488 --rc genhtml_function_coverage=1 00:05:20.488 --rc genhtml_legend=1 00:05:20.488 --rc geninfo_all_blocks=1 00:05:20.488 --rc geninfo_unexecuted_blocks=1 00:05:20.488 00:05:20.488 ' 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.488 --rc genhtml_branch_coverage=1 00:05:20.488 --rc genhtml_function_coverage=1 00:05:20.488 --rc genhtml_legend=1 00:05:20.488 --rc geninfo_all_blocks=1 00:05:20.488 --rc geninfo_unexecuted_blocks=1 00:05:20.488 00:05:20.488 ' 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.488 --rc genhtml_branch_coverage=1 00:05:20.488 --rc genhtml_function_coverage=1 00:05:20.488 --rc genhtml_legend=1 00:05:20.488 --rc geninfo_all_blocks=1 00:05:20.488 --rc geninfo_unexecuted_blocks=1 00:05:20.488 00:05:20.488 ' 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.488 --rc genhtml_branch_coverage=1 00:05:20.488 --rc genhtml_function_coverage=1 00:05:20.488 --rc genhtml_legend=1 00:05:20.488 --rc geninfo_all_blocks=1 00:05:20.488 --rc geninfo_unexecuted_blocks=1 00:05:20.488 00:05:20.488 ' 00:05:20.488 19:03:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.488 19:03:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:20.488 19:03:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.488 19:03:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.488 ************************************ 00:05:20.488 START TEST skip_rpc 00:05:20.488 ************************************ 00:05:20.488 19:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:20.488 19:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57184 00:05:20.488 19:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:20.488 19:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.489 19:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:20.489 [2024-11-27 19:03:29.796063] Starting SPDK v25.01-pre 
git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:20.489 [2024-11-27 19:03:29.796175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57184 ] 00:05:20.489 [2024-11-27 19:03:29.970001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.489 [2024-11-27 19:03:30.107946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57184 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57184 ']' 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57184 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57184 00:05:25.754 killing process with pid 57184 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57184' 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57184 00:05:25.754 19:03:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57184 00:05:28.288 ************************************ 00:05:28.288 END TEST skip_rpc 00:05:28.288 ************************************ 00:05:28.288 00:05:28.288 real 0m7.618s 00:05:28.288 user 0m6.998s 00:05:28.288 sys 0m0.541s 00:05:28.288 19:03:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.288 19:03:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.288 19:03:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.288 19:03:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.288 19:03:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.288 19:03:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.288 
************************************ 00:05:28.288 START TEST skip_rpc_with_json 00:05:28.288 ************************************ 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57288 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57288 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57288 ']' 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.288 19:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.288 [2024-11-27 19:03:37.496905] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:28.288 [2024-11-27 19:03:37.497143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57288 ] 00:05:28.288 [2024-11-27 19:03:37.675009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.288 [2024-11-27 19:03:37.803633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.224 [2024-11-27 19:03:38.798955] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.224 request: 00:05:29.224 { 00:05:29.224 "trtype": "tcp", 00:05:29.224 "method": "nvmf_get_transports", 00:05:29.224 "req_id": 1 00:05:29.224 } 00:05:29.224 Got JSON-RPC error response 00:05:29.224 response: 00:05:29.224 { 00:05:29.224 "code": -19, 00:05:29.224 "message": "No such device" 00:05:29.224 } 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.224 [2024-11-27 19:03:38.811049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.224 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.486 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.486 19:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.486 { 00:05:29.486 "subsystems": [ 00:05:29.486 { 00:05:29.486 "subsystem": "fsdev", 00:05:29.486 "config": [ 00:05:29.486 { 00:05:29.486 "method": "fsdev_set_opts", 00:05:29.486 "params": { 00:05:29.486 "fsdev_io_pool_size": 65535, 00:05:29.486 "fsdev_io_cache_size": 256 00:05:29.486 } 00:05:29.486 } 00:05:29.486 ] 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "subsystem": "keyring", 00:05:29.486 "config": [] 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "subsystem": "iobuf", 00:05:29.486 "config": [ 00:05:29.486 { 00:05:29.486 "method": "iobuf_set_options", 00:05:29.486 "params": { 00:05:29.486 "small_pool_count": 8192, 00:05:29.486 "large_pool_count": 1024, 00:05:29.486 "small_bufsize": 8192, 00:05:29.486 "large_bufsize": 135168, 00:05:29.486 "enable_numa": false 00:05:29.486 } 00:05:29.486 } 00:05:29.486 ] 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "subsystem": "sock", 00:05:29.486 "config": [ 00:05:29.486 { 00:05:29.486 "method": "sock_set_default_impl", 00:05:29.486 "params": { 00:05:29.486 "impl_name": "posix" 00:05:29.486 } 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "method": "sock_impl_set_options", 00:05:29.486 "params": { 00:05:29.486 "impl_name": "ssl", 00:05:29.486 "recv_buf_size": 4096, 00:05:29.486 "send_buf_size": 4096, 00:05:29.486 "enable_recv_pipe": true, 00:05:29.486 "enable_quickack": false, 00:05:29.486 
"enable_placement_id": 0, 00:05:29.486 "enable_zerocopy_send_server": true, 00:05:29.486 "enable_zerocopy_send_client": false, 00:05:29.486 "zerocopy_threshold": 0, 00:05:29.486 "tls_version": 0, 00:05:29.486 "enable_ktls": false 00:05:29.486 } 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "method": "sock_impl_set_options", 00:05:29.486 "params": { 00:05:29.486 "impl_name": "posix", 00:05:29.486 "recv_buf_size": 2097152, 00:05:29.486 "send_buf_size": 2097152, 00:05:29.486 "enable_recv_pipe": true, 00:05:29.486 "enable_quickack": false, 00:05:29.486 "enable_placement_id": 0, 00:05:29.486 "enable_zerocopy_send_server": true, 00:05:29.486 "enable_zerocopy_send_client": false, 00:05:29.486 "zerocopy_threshold": 0, 00:05:29.486 "tls_version": 0, 00:05:29.486 "enable_ktls": false 00:05:29.486 } 00:05:29.486 } 00:05:29.486 ] 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "subsystem": "vmd", 00:05:29.486 "config": [] 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "subsystem": "accel", 00:05:29.486 "config": [ 00:05:29.486 { 00:05:29.486 "method": "accel_set_options", 00:05:29.486 "params": { 00:05:29.486 "small_cache_size": 128, 00:05:29.486 "large_cache_size": 16, 00:05:29.486 "task_count": 2048, 00:05:29.486 "sequence_count": 2048, 00:05:29.486 "buf_count": 2048 00:05:29.486 } 00:05:29.486 } 00:05:29.486 ] 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "subsystem": "bdev", 00:05:29.486 "config": [ 00:05:29.486 { 00:05:29.486 "method": "bdev_set_options", 00:05:29.486 "params": { 00:05:29.486 "bdev_io_pool_size": 65535, 00:05:29.486 "bdev_io_cache_size": 256, 00:05:29.486 "bdev_auto_examine": true, 00:05:29.486 "iobuf_small_cache_size": 128, 00:05:29.486 "iobuf_large_cache_size": 16 00:05:29.486 } 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "method": "bdev_raid_set_options", 00:05:29.486 "params": { 00:05:29.486 "process_window_size_kb": 1024, 00:05:29.486 "process_max_bandwidth_mb_sec": 0 00:05:29.486 } 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "method": "bdev_iscsi_set_options", 
00:05:29.486 "params": { 00:05:29.486 "timeout_sec": 30 00:05:29.486 } 00:05:29.486 }, 00:05:29.486 { 00:05:29.486 "method": "bdev_nvme_set_options", 00:05:29.486 "params": { 00:05:29.486 "action_on_timeout": "none", 00:05:29.486 "timeout_us": 0, 00:05:29.486 "timeout_admin_us": 0, 00:05:29.486 "keep_alive_timeout_ms": 10000, 00:05:29.486 "arbitration_burst": 0, 00:05:29.486 "low_priority_weight": 0, 00:05:29.486 "medium_priority_weight": 0, 00:05:29.486 "high_priority_weight": 0, 00:05:29.486 "nvme_adminq_poll_period_us": 10000, 00:05:29.486 "nvme_ioq_poll_period_us": 0, 00:05:29.486 "io_queue_requests": 0, 00:05:29.486 "delay_cmd_submit": true, 00:05:29.486 "transport_retry_count": 4, 00:05:29.486 "bdev_retry_count": 3, 00:05:29.486 "transport_ack_timeout": 0, 00:05:29.486 "ctrlr_loss_timeout_sec": 0, 00:05:29.486 "reconnect_delay_sec": 0, 00:05:29.486 "fast_io_fail_timeout_sec": 0, 00:05:29.486 "disable_auto_failback": false, 00:05:29.486 "generate_uuids": false, 00:05:29.486 "transport_tos": 0, 00:05:29.486 "nvme_error_stat": false, 00:05:29.486 "rdma_srq_size": 0, 00:05:29.486 "io_path_stat": false, 00:05:29.486 "allow_accel_sequence": false, 00:05:29.486 "rdma_max_cq_size": 0, 00:05:29.486 "rdma_cm_event_timeout_ms": 0, 00:05:29.486 "dhchap_digests": [ 00:05:29.486 "sha256", 00:05:29.486 "sha384", 00:05:29.486 "sha512" 00:05:29.487 ], 00:05:29.487 "dhchap_dhgroups": [ 00:05:29.487 "null", 00:05:29.487 "ffdhe2048", 00:05:29.487 "ffdhe3072", 00:05:29.487 "ffdhe4096", 00:05:29.487 "ffdhe6144", 00:05:29.487 "ffdhe8192" 00:05:29.487 ] 00:05:29.487 } 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "method": "bdev_nvme_set_hotplug", 00:05:29.487 "params": { 00:05:29.487 "period_us": 100000, 00:05:29.487 "enable": false 00:05:29.487 } 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "method": "bdev_wait_for_examine" 00:05:29.487 } 00:05:29.487 ] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "scsi", 00:05:29.487 "config": null 00:05:29.487 }, 00:05:29.487 { 
00:05:29.487 "subsystem": "scheduler", 00:05:29.487 "config": [ 00:05:29.487 { 00:05:29.487 "method": "framework_set_scheduler", 00:05:29.487 "params": { 00:05:29.487 "name": "static" 00:05:29.487 } 00:05:29.487 } 00:05:29.487 ] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "vhost_scsi", 00:05:29.487 "config": [] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "vhost_blk", 00:05:29.487 "config": [] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "ublk", 00:05:29.487 "config": [] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "nbd", 00:05:29.487 "config": [] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "nvmf", 00:05:29.487 "config": [ 00:05:29.487 { 00:05:29.487 "method": "nvmf_set_config", 00:05:29.487 "params": { 00:05:29.487 "discovery_filter": "match_any", 00:05:29.487 "admin_cmd_passthru": { 00:05:29.487 "identify_ctrlr": false 00:05:29.487 }, 00:05:29.487 "dhchap_digests": [ 00:05:29.487 "sha256", 00:05:29.487 "sha384", 00:05:29.487 "sha512" 00:05:29.487 ], 00:05:29.487 "dhchap_dhgroups": [ 00:05:29.487 "null", 00:05:29.487 "ffdhe2048", 00:05:29.487 "ffdhe3072", 00:05:29.487 "ffdhe4096", 00:05:29.487 "ffdhe6144", 00:05:29.487 "ffdhe8192" 00:05:29.487 ] 00:05:29.487 } 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "method": "nvmf_set_max_subsystems", 00:05:29.487 "params": { 00:05:29.487 "max_subsystems": 1024 00:05:29.487 } 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "method": "nvmf_set_crdt", 00:05:29.487 "params": { 00:05:29.487 "crdt1": 0, 00:05:29.487 "crdt2": 0, 00:05:29.487 "crdt3": 0 00:05:29.487 } 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "method": "nvmf_create_transport", 00:05:29.487 "params": { 00:05:29.487 "trtype": "TCP", 00:05:29.487 "max_queue_depth": 128, 00:05:29.487 "max_io_qpairs_per_ctrlr": 127, 00:05:29.487 "in_capsule_data_size": 4096, 00:05:29.487 "max_io_size": 131072, 00:05:29.487 "io_unit_size": 131072, 00:05:29.487 "max_aq_depth": 128, 00:05:29.487 "num_shared_buffers": 511, 
00:05:29.487 "buf_cache_size": 4294967295, 00:05:29.487 "dif_insert_or_strip": false, 00:05:29.487 "zcopy": false, 00:05:29.487 "c2h_success": true, 00:05:29.487 "sock_priority": 0, 00:05:29.487 "abort_timeout_sec": 1, 00:05:29.487 "ack_timeout": 0, 00:05:29.487 "data_wr_pool_size": 0 00:05:29.487 } 00:05:29.487 } 00:05:29.487 ] 00:05:29.487 }, 00:05:29.487 { 00:05:29.487 "subsystem": "iscsi", 00:05:29.487 "config": [ 00:05:29.487 { 00:05:29.487 "method": "iscsi_set_options", 00:05:29.487 "params": { 00:05:29.487 "node_base": "iqn.2016-06.io.spdk", 00:05:29.487 "max_sessions": 128, 00:05:29.487 "max_connections_per_session": 2, 00:05:29.487 "max_queue_depth": 64, 00:05:29.487 "default_time2wait": 2, 00:05:29.487 "default_time2retain": 20, 00:05:29.487 "first_burst_length": 8192, 00:05:29.487 "immediate_data": true, 00:05:29.487 "allow_duplicated_isid": false, 00:05:29.487 "error_recovery_level": 0, 00:05:29.487 "nop_timeout": 60, 00:05:29.487 "nop_in_interval": 30, 00:05:29.487 "disable_chap": false, 00:05:29.487 "require_chap": false, 00:05:29.487 "mutual_chap": false, 00:05:29.487 "chap_group": 0, 00:05:29.487 "max_large_datain_per_connection": 64, 00:05:29.487 "max_r2t_per_connection": 4, 00:05:29.487 "pdu_pool_size": 36864, 00:05:29.487 "immediate_data_pool_size": 16384, 00:05:29.487 "data_out_pool_size": 2048 00:05:29.487 } 00:05:29.487 } 00:05:29.487 ] 00:05:29.487 } 00:05:29.487 ] 00:05:29.487 } 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57288 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57288 ']' 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57288 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.487 19:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57288 00:05:29.487 19:03:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.487 19:03:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.487 19:03:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57288' 00:05:29.487 killing process with pid 57288 00:05:29.487 19:03:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57288 00:05:29.487 19:03:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57288 00:05:32.064 19:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57344 00:05:32.065 19:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:32.065 19:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57344 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57344 ']' 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57344 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57344 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.334 killing process with pid 57344 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57344' 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57344 00:05:37.334 19:03:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57344 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:39.866 ************************************ 00:05:39.866 END TEST skip_rpc_with_json 00:05:39.866 ************************************ 00:05:39.866 00:05:39.866 real 0m11.757s 00:05:39.866 user 0m10.839s 00:05:39.866 sys 0m1.209s 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.866 19:03:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.866 19:03:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.866 19:03:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.866 19:03:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.866 ************************************ 00:05:39.866 START TEST skip_rpc_with_delay 00:05:39.866 ************************************ 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:39.866 
19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.866 [2024-11-27 19:03:49.328871] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.866 00:05:39.866 real 0m0.177s 00:05:39.866 user 0m0.091s 00:05:39.866 sys 0m0.083s 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.866 19:03:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.866 ************************************ 00:05:39.866 END TEST skip_rpc_with_delay 00:05:39.866 ************************************ 00:05:39.866 19:03:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.866 19:03:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.866 19:03:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.866 19:03:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.866 19:03:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.866 19:03:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.866 ************************************ 00:05:39.866 START TEST exit_on_failed_rpc_init 00:05:39.866 ************************************ 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57483 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57483 00:05:39.866 19:03:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57483 ']' 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.866 19:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.124 [2024-11-27 19:03:49.576950] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:40.124 [2024-11-27 19:03:49.577185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57483 ] 00:05:40.124 [2024-11-27 19:03:49.752361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.383 [2024-11-27 19:03:49.886400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.319 19:03:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:41.319 19:03:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.578 [2024-11-27 19:03:50.989254] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:41.578 [2024-11-27 19:03:50.989384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57501 ] 00:05:41.578 [2024-11-27 19:03:51.170297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.836 [2024-11-27 19:03:51.281594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.836 [2024-11-27 19:03:51.281707] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:41.836 [2024-11-27 19:03:51.281723] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:41.836 [2024-11-27 19:03:51.281736] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57483 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57483 ']' 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57483 00:05:42.093 19:03:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57483 00:05:42.093 killing process with pid 57483 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57483' 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57483 00:05:42.093 19:03:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57483 00:05:44.645 00:05:44.645 real 0m4.651s 00:05:44.645 user 0m4.741s 00:05:44.645 sys 0m0.791s 00:05:44.645 19:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.645 ************************************ 00:05:44.645 END TEST exit_on_failed_rpc_init 00:05:44.645 ************************************ 00:05:44.645 19:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.645 19:03:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:44.645 00:05:44.645 real 0m24.726s 00:05:44.645 user 0m22.872s 00:05:44.645 sys 0m2.962s 00:05:44.645 19:03:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.645 ************************************ 00:05:44.645 END TEST skip_rpc 00:05:44.645 ************************************ 00:05:44.645 19:03:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.645 19:03:54 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.645 19:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.645 19:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.645 19:03:54 -- common/autotest_common.sh@10 -- # set +x 00:05:44.645 ************************************ 00:05:44.645 START TEST rpc_client 00:05:44.645 ************************************ 00:05:44.645 19:03:54 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.905 * Looking for test storage... 00:05:44.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.905 19:03:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.905 --rc genhtml_branch_coverage=1 00:05:44.905 --rc genhtml_function_coverage=1 00:05:44.905 --rc genhtml_legend=1 00:05:44.905 --rc geninfo_all_blocks=1 00:05:44.905 --rc geninfo_unexecuted_blocks=1 00:05:44.905 00:05:44.905 ' 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.905 --rc genhtml_branch_coverage=1 00:05:44.905 --rc genhtml_function_coverage=1 00:05:44.905 --rc 
genhtml_legend=1 00:05:44.905 --rc geninfo_all_blocks=1 00:05:44.905 --rc geninfo_unexecuted_blocks=1 00:05:44.905 00:05:44.905 ' 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.905 --rc genhtml_branch_coverage=1 00:05:44.905 --rc genhtml_function_coverage=1 00:05:44.905 --rc genhtml_legend=1 00:05:44.905 --rc geninfo_all_blocks=1 00:05:44.905 --rc geninfo_unexecuted_blocks=1 00:05:44.905 00:05:44.905 ' 00:05:44.905 19:03:54 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.905 --rc genhtml_branch_coverage=1 00:05:44.905 --rc genhtml_function_coverage=1 00:05:44.905 --rc genhtml_legend=1 00:05:44.905 --rc geninfo_all_blocks=1 00:05:44.905 --rc geninfo_unexecuted_blocks=1 00:05:44.905 00:05:44.905 ' 00:05:44.905 19:03:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:44.905 OK 00:05:45.164 19:03:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.164 00:05:45.164 real 0m0.316s 00:05:45.165 user 0m0.153s 00:05:45.165 sys 0m0.177s 00:05:45.165 19:03:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.165 19:03:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:45.165 ************************************ 00:05:45.165 END TEST rpc_client 00:05:45.165 ************************************ 00:05:45.165 19:03:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.165 19:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.165 19:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.165 19:03:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.165 ************************************ 00:05:45.165 START TEST json_config 
00:05:45.165 ************************************ 00:05:45.165 19:03:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.165 19:03:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.165 19:03:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.165 19:03:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.425 19:03:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.425 19:03:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.425 19:03:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.425 19:03:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.425 19:03:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.425 19:03:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:45.425 19:03:54 json_config -- scripts/common.sh@345 -- # : 1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.425 19:03:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.425 19:03:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@353 -- # local d=1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.425 19:03:54 json_config -- scripts/common.sh@355 -- # echo 1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.425 19:03:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@353 -- # local d=2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.425 19:03:54 json_config -- scripts/common.sh@355 -- # echo 2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.425 19:03:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.425 19:03:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.425 19:03:54 json_config -- scripts/common.sh@368 -- # return 0 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.425 --rc genhtml_branch_coverage=1 00:05:45.425 --rc genhtml_function_coverage=1 00:05:45.425 --rc genhtml_legend=1 00:05:45.425 --rc geninfo_all_blocks=1 00:05:45.425 --rc geninfo_unexecuted_blocks=1 00:05:45.425 00:05:45.425 ' 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.425 --rc genhtml_branch_coverage=1 00:05:45.425 --rc genhtml_function_coverage=1 00:05:45.425 --rc genhtml_legend=1 00:05:45.425 --rc geninfo_all_blocks=1 00:05:45.425 --rc geninfo_unexecuted_blocks=1 00:05:45.425 00:05:45.425 ' 00:05:45.425 19:03:54 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.425 --rc genhtml_branch_coverage=1 00:05:45.425 --rc genhtml_function_coverage=1 00:05:45.425 --rc genhtml_legend=1 00:05:45.425 --rc geninfo_all_blocks=1 00:05:45.425 --rc geninfo_unexecuted_blocks=1 00:05:45.425 00:05:45.425 ' 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.425 --rc genhtml_branch_coverage=1 00:05:45.425 --rc genhtml_function_coverage=1 00:05:45.425 --rc genhtml_legend=1 00:05:45.425 --rc geninfo_all_blocks=1 00:05:45.425 --rc geninfo_unexecuted_blocks=1 00:05:45.425 00:05:45.425 ' 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f596d6fb-0518-4483-83ba-bd5f5a3cc19e 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f596d6fb-0518-4483-83ba-bd5f5a3cc19e 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.425 19:03:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.425 19:03:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.425 19:03:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.425 19:03:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.425 19:03:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.425 19:03:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.425 19:03:54 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.425 19:03:54 json_config -- paths/export.sh@5 -- # export PATH 00:05:45.425 19:03:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@51 -- # : 0 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.425 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.425 19:03:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
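The "integer expression expected" diagnostic captured above comes from the traced test `'[' '' -eq 1 ']'`: `-eq` requires integer operands, and the variable under test expanded to an empty string. A minimal reproduction, with `maybe_flag` as a hypothetical variable name standing in for whatever nvmf/common.sh line 33 tests:

```shell
# Reproduces the failure mode logged above: `[ ... -eq ... ]` with an
# empty operand prints "integer expression expected" on stderr and
# exits with status 2 instead of a clean true/false.
maybe_flag=''

unguarded_status=0
[ "$maybe_flag" -eq 1 ] 2>/dev/null || unguarded_status=$?   # error: empty string is not an integer

# A common guard is to default the variable to 0 before the numeric test:
guarded_status=0
[ "${maybe_flag:-0}" -eq 1 ] || guarded_status=$?            # clean false: 0 != 1

echo "unguarded=$unguarded_status guarded=$guarded_status"
```

Status 2 signals a usage error rather than a false comparison, which is why the message appears in the log; the test script is not running under `set -e` here, so execution simply continues past it, as the subsequent trace lines show.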
00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:45.425 WARNING: No tests are enabled so not running JSON configuration tests 00:05:45.425 19:03:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:45.425 00:05:45.425 real 0m0.218s 00:05:45.425 user 0m0.135s 00:05:45.425 sys 0m0.091s 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.425 19:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.425 ************************************ 00:05:45.425 END TEST json_config 00:05:45.425 ************************************ 00:05:45.425 19:03:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.425 19:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.425 19:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.425 19:03:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.425 ************************************ 00:05:45.425 START TEST json_config_extra_key 00:05:45.425 ************************************ 00:05:45.425 19:03:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:45.425 19:03:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.426 19:03:54 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:45.426 19:03:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.686 19:03:55 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:45.686 19:03:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.686 19:03:55 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.686 --rc genhtml_branch_coverage=1 00:05:45.686 --rc genhtml_function_coverage=1 00:05:45.686 --rc genhtml_legend=1 00:05:45.686 --rc geninfo_all_blocks=1 00:05:45.686 --rc geninfo_unexecuted_blocks=1 00:05:45.686 00:05:45.686 ' 00:05:45.686 19:03:55 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.686 --rc genhtml_branch_coverage=1 00:05:45.686 --rc genhtml_function_coverage=1 00:05:45.686 --rc 
genhtml_legend=1 00:05:45.686 --rc geninfo_all_blocks=1 00:05:45.686 --rc geninfo_unexecuted_blocks=1 00:05:45.686 00:05:45.686 ' 00:05:45.686 19:03:55 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.686 --rc genhtml_branch_coverage=1 00:05:45.686 --rc genhtml_function_coverage=1 00:05:45.686 --rc genhtml_legend=1 00:05:45.686 --rc geninfo_all_blocks=1 00:05:45.686 --rc geninfo_unexecuted_blocks=1 00:05:45.686 00:05:45.686 ' 00:05:45.686 19:03:55 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.686 --rc genhtml_branch_coverage=1 00:05:45.686 --rc genhtml_function_coverage=1 00:05:45.686 --rc genhtml_legend=1 00:05:45.686 --rc geninfo_all_blocks=1 00:05:45.686 --rc geninfo_unexecuted_blocks=1 00:05:45.686 00:05:45.686 ' 00:05:45.686 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f596d6fb-0518-4483-83ba-bd5f5a3cc19e 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f596d6fb-0518-4483-83ba-bd5f5a3cc19e 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.686 19:03:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.686 19:03:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.686 19:03:55 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.686 19:03:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.686 19:03:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:45.686 19:03:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.686 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.686 19:03:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.687 19:03:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.687 19:03:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:45.687 INFO: launching applications... 
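The `app_pid`/`app_socket`/`app_params`/`configs_path` declarations traced above use bash associative arrays keyed by app name, so one set of helper functions can manage several apps. A self-contained sketch of that pattern (values copied from the trace; the `echo` is illustrative):

```shell
# Bash associative arrays keyed by app name ("target") hold the PID,
# the RPC socket path, and the launch parameters for each app under test.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

app=target
echo "app=$app socket=${app_socket[$app]} params=${app_params[$app]}"
```

Helpers can then take the app name as an argument (as `json_config_test_start_app target ...` does in the trace) and look up everything else from the arrays.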
00:05:45.687 19:03:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.687 Waiting for target to run... 00:05:45.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57711 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57711 /var/tmp/spdk_tgt.sock 00:05:45.687 19:03:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57711 ']' 00:05:45.687 19:03:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:45.687 19:03:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.687 19:03:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.687 19:03:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.687 19:03:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.687 19:03:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.687 [2024-11-27 19:03:55.244030] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:45.687 [2024-11-27 19:03:55.244240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57711 ] 00:05:46.255 [2024-11-27 19:03:55.806497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.513 [2024-11-27 19:03:55.924358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.081 19:03:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.081 19:03:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.081 00:05:47.081 19:03:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:47.081 INFO: shutting down applications... 00:05:47.081 19:03:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57711 ]] 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57711 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:47.081 19:03:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:47.649 19:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:47.649 19:03:57 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.649 19:03:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:47.649 19:03:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.216 19:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.216 19:03:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.216 19:03:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:48.216 19:03:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.786 19:03:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.786 19:03:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.786 19:03:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:48.786 19:03:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.045 19:03:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.045 19:03:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.045 19:03:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:49.045 19:03:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.612 19:03:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.612 19:03:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.613 19:03:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:49.613 19:03:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57711 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:05:50.181 SPDK target shutdown done 00:05:50.181 Success 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.181 19:03:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.181 19:03:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.181 00:05:50.181 real 0m4.774s 00:05:50.181 user 0m4.215s 00:05:50.181 sys 0m0.790s 00:05:50.181 19:03:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.181 19:03:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.181 ************************************ 00:05:50.181 END TEST json_config_extra_key 00:05:50.181 ************************************ 00:05:50.181 19:03:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.181 19:03:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.181 19:03:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.181 19:03:59 -- common/autotest_common.sh@10 -- # set +x 00:05:50.181 ************************************ 00:05:50.181 START TEST alias_rpc 00:05:50.181 ************************************ 00:05:50.181 19:03:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.441 * Looking for test storage... 
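The shutdown sequence traced above (`kill -SIGINT 57711`, then repeated `kill -0 57711` / `sleep 0.5` until the process disappears) is a standard polling loop. A sketch under stated assumptions: the function name and the optional signal argument are illustrative additions, while the traced script hardcodes SIGINT and 30 retries:

```shell
# Send a signal, then poll with `kill -0` (which only checks that the
# PID still exists, delivering no signal) every 0.5 s, for up to 30
# iterations, before giving up.
wait_for_exit() {
    local pid=$1 sig=${2:-SIGINT}
    kill -s "$sig" "$pid" 2>/dev/null
    local i
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0   # process is gone
        sleep 0.5
    done
    return 1   # still alive after ~15 s
}

# Example: a background sleep terminates promptly on SIGTERM.
sleep 60 &
wait_for_exit "$!" TERM && echo "target shutdown done"
```

In the log the loop iterates several times (each `sleep 0.5` shows up as a trace line) before `kill -0` fails, `break` fires, and "SPDK target shutdown done" is printed.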
00:05:50.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.441 19:03:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.441 --rc genhtml_branch_coverage=1 00:05:50.441 --rc genhtml_function_coverage=1 00:05:50.441 --rc genhtml_legend=1 00:05:50.441 --rc geninfo_all_blocks=1 00:05:50.441 --rc geninfo_unexecuted_blocks=1 00:05:50.441 00:05:50.441 ' 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.441 --rc genhtml_branch_coverage=1 00:05:50.441 --rc genhtml_function_coverage=1 00:05:50.441 --rc genhtml_legend=1 00:05:50.441 --rc geninfo_all_blocks=1 00:05:50.441 --rc geninfo_unexecuted_blocks=1 00:05:50.441 00:05:50.441 ' 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.441 --rc genhtml_branch_coverage=1 00:05:50.441 --rc genhtml_function_coverage=1 00:05:50.441 --rc genhtml_legend=1 00:05:50.441 --rc geninfo_all_blocks=1 00:05:50.441 --rc geninfo_unexecuted_blocks=1 00:05:50.441 00:05:50.441 ' 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.441 --rc genhtml_branch_coverage=1 00:05:50.441 --rc genhtml_function_coverage=1 00:05:50.441 --rc genhtml_legend=1 00:05:50.441 --rc geninfo_all_blocks=1 00:05:50.441 --rc geninfo_unexecuted_blocks=1 00:05:50.441 00:05:50.441 ' 00:05:50.441 19:03:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.441 19:03:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57827 00:05:50.441 19:03:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.441 19:03:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57827 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57827 ']' 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.441 19:03:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.442 19:03:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.442 19:03:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.701 [2024-11-27 19:04:00.096651] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:50.701 [2024-11-27 19:04:00.096935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57827 ] 00:05:50.701 [2024-11-27 19:04:00.274992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.960 [2024-11-27 19:04:00.412081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.897 19:04:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.897 19:04:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.897 19:04:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:52.156 19:04:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57827 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57827 ']' 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57827 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57827 00:05:52.156 killing process with pid 57827 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57827' 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 57827 00:05:52.156 19:04:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 57827 00:05:54.694 ************************************ 00:05:54.694 END TEST alias_rpc 00:05:54.694 ************************************ 00:05:54.694 00:05:54.694 real 
0m4.542s 00:05:54.694 user 0m4.369s 00:05:54.694 sys 0m0.746s 00:05:54.694 19:04:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.694 19:04:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.955 19:04:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:54.955 19:04:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.955 19:04:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.955 19:04:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.955 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:05:54.955 ************************************ 00:05:54.955 START TEST spdkcli_tcp 00:05:54.955 ************************************ 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.955 * Looking for test storage... 00:05:54.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.955 
19:04:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.955 19:04:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.955 --rc genhtml_branch_coverage=1 00:05:54.955 --rc genhtml_function_coverage=1 00:05:54.955 --rc genhtml_legend=1 
00:05:54.955 --rc geninfo_all_blocks=1 00:05:54.955 --rc geninfo_unexecuted_blocks=1 00:05:54.955 00:05:54.955 ' 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.955 --rc genhtml_branch_coverage=1 00:05:54.955 --rc genhtml_function_coverage=1 00:05:54.955 --rc genhtml_legend=1 00:05:54.955 --rc geninfo_all_blocks=1 00:05:54.955 --rc geninfo_unexecuted_blocks=1 00:05:54.955 00:05:54.955 ' 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.955 --rc genhtml_branch_coverage=1 00:05:54.955 --rc genhtml_function_coverage=1 00:05:54.955 --rc genhtml_legend=1 00:05:54.955 --rc geninfo_all_blocks=1 00:05:54.955 --rc geninfo_unexecuted_blocks=1 00:05:54.955 00:05:54.955 ' 00:05:54.955 19:04:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.955 --rc genhtml_branch_coverage=1 00:05:54.955 --rc genhtml_function_coverage=1 00:05:54.956 --rc genhtml_legend=1 00:05:54.956 --rc geninfo_all_blocks=1 00:05:54.956 --rc geninfo_unexecuted_blocks=1 00:05:54.956 00:05:54.956 ' 00:05:54.956 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:54.956 19:04:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:54.956 19:04:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:54.956 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.956 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.956 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.956 19:04:04 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.956 19:04:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.956 19:04:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.215 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57935 00:05:55.215 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57935 00:05:55.215 19:04:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57935 ']' 00:05:55.215 19:04:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.215 19:04:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.215 19:04:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.215 19:04:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.215 19:04:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.215 19:04:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.215 [2024-11-27 19:04:04.705489] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:55.215 [2024-11-27 19:04:04.705615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57935 ] 00:05:55.475 [2024-11-27 19:04:04.883638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.475 [2024-11-27 19:04:05.015804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.475 [2024-11-27 19:04:05.015856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.459 19:04:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.459 19:04:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:56.459 19:04:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57958 00:05:56.459 19:04:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:56.459 19:04:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.720 [ 00:05:56.720 "bdev_malloc_delete", 00:05:56.720 "bdev_malloc_create", 00:05:56.720 "bdev_null_resize", 00:05:56.720 "bdev_null_delete", 00:05:56.720 "bdev_null_create", 00:05:56.720 "bdev_nvme_cuse_unregister", 00:05:56.720 "bdev_nvme_cuse_register", 00:05:56.720 "bdev_opal_new_user", 00:05:56.720 "bdev_opal_set_lock_state", 00:05:56.720 "bdev_opal_delete", 00:05:56.720 "bdev_opal_get_info", 00:05:56.720 "bdev_opal_create", 00:05:56.720 "bdev_nvme_opal_revert", 00:05:56.720 "bdev_nvme_opal_init", 00:05:56.720 "bdev_nvme_send_cmd", 00:05:56.720 "bdev_nvme_set_keys", 00:05:56.720 "bdev_nvme_get_path_iostat", 00:05:56.720 "bdev_nvme_get_mdns_discovery_info", 00:05:56.720 "bdev_nvme_stop_mdns_discovery", 00:05:56.720 "bdev_nvme_start_mdns_discovery", 00:05:56.720 "bdev_nvme_set_multipath_policy", 00:05:56.720 
"bdev_nvme_set_preferred_path", 00:05:56.720 "bdev_nvme_get_io_paths", 00:05:56.720 "bdev_nvme_remove_error_injection", 00:05:56.720 "bdev_nvme_add_error_injection", 00:05:56.720 "bdev_nvme_get_discovery_info", 00:05:56.720 "bdev_nvme_stop_discovery", 00:05:56.720 "bdev_nvme_start_discovery", 00:05:56.720 "bdev_nvme_get_controller_health_info", 00:05:56.720 "bdev_nvme_disable_controller", 00:05:56.720 "bdev_nvme_enable_controller", 00:05:56.720 "bdev_nvme_reset_controller", 00:05:56.720 "bdev_nvme_get_transport_statistics", 00:05:56.720 "bdev_nvme_apply_firmware", 00:05:56.720 "bdev_nvme_detach_controller", 00:05:56.720 "bdev_nvme_get_controllers", 00:05:56.720 "bdev_nvme_attach_controller", 00:05:56.720 "bdev_nvme_set_hotplug", 00:05:56.720 "bdev_nvme_set_options", 00:05:56.720 "bdev_passthru_delete", 00:05:56.720 "bdev_passthru_create", 00:05:56.720 "bdev_lvol_set_parent_bdev", 00:05:56.720 "bdev_lvol_set_parent", 00:05:56.720 "bdev_lvol_check_shallow_copy", 00:05:56.720 "bdev_lvol_start_shallow_copy", 00:05:56.720 "bdev_lvol_grow_lvstore", 00:05:56.720 "bdev_lvol_get_lvols", 00:05:56.720 "bdev_lvol_get_lvstores", 00:05:56.720 "bdev_lvol_delete", 00:05:56.720 "bdev_lvol_set_read_only", 00:05:56.720 "bdev_lvol_resize", 00:05:56.720 "bdev_lvol_decouple_parent", 00:05:56.720 "bdev_lvol_inflate", 00:05:56.720 "bdev_lvol_rename", 00:05:56.720 "bdev_lvol_clone_bdev", 00:05:56.720 "bdev_lvol_clone", 00:05:56.720 "bdev_lvol_snapshot", 00:05:56.720 "bdev_lvol_create", 00:05:56.720 "bdev_lvol_delete_lvstore", 00:05:56.720 "bdev_lvol_rename_lvstore", 00:05:56.720 "bdev_lvol_create_lvstore", 00:05:56.720 "bdev_raid_set_options", 00:05:56.720 "bdev_raid_remove_base_bdev", 00:05:56.720 "bdev_raid_add_base_bdev", 00:05:56.720 "bdev_raid_delete", 00:05:56.720 "bdev_raid_create", 00:05:56.720 "bdev_raid_get_bdevs", 00:05:56.720 "bdev_error_inject_error", 00:05:56.720 "bdev_error_delete", 00:05:56.720 "bdev_error_create", 00:05:56.720 "bdev_split_delete", 00:05:56.720 
"bdev_split_create", 00:05:56.720 "bdev_delay_delete", 00:05:56.720 "bdev_delay_create", 00:05:56.720 "bdev_delay_update_latency", 00:05:56.720 "bdev_zone_block_delete", 00:05:56.720 "bdev_zone_block_create", 00:05:56.720 "blobfs_create", 00:05:56.720 "blobfs_detect", 00:05:56.720 "blobfs_set_cache_size", 00:05:56.720 "bdev_aio_delete", 00:05:56.720 "bdev_aio_rescan", 00:05:56.720 "bdev_aio_create", 00:05:56.720 "bdev_ftl_set_property", 00:05:56.720 "bdev_ftl_get_properties", 00:05:56.720 "bdev_ftl_get_stats", 00:05:56.720 "bdev_ftl_unmap", 00:05:56.720 "bdev_ftl_unload", 00:05:56.720 "bdev_ftl_delete", 00:05:56.720 "bdev_ftl_load", 00:05:56.720 "bdev_ftl_create", 00:05:56.720 "bdev_virtio_attach_controller", 00:05:56.720 "bdev_virtio_scsi_get_devices", 00:05:56.720 "bdev_virtio_detach_controller", 00:05:56.720 "bdev_virtio_blk_set_hotplug", 00:05:56.720 "bdev_iscsi_delete", 00:05:56.720 "bdev_iscsi_create", 00:05:56.720 "bdev_iscsi_set_options", 00:05:56.720 "accel_error_inject_error", 00:05:56.720 "ioat_scan_accel_module", 00:05:56.720 "dsa_scan_accel_module", 00:05:56.720 "iaa_scan_accel_module", 00:05:56.720 "keyring_file_remove_key", 00:05:56.720 "keyring_file_add_key", 00:05:56.720 "keyring_linux_set_options", 00:05:56.720 "fsdev_aio_delete", 00:05:56.720 "fsdev_aio_create", 00:05:56.720 "iscsi_get_histogram", 00:05:56.720 "iscsi_enable_histogram", 00:05:56.720 "iscsi_set_options", 00:05:56.720 "iscsi_get_auth_groups", 00:05:56.720 "iscsi_auth_group_remove_secret", 00:05:56.720 "iscsi_auth_group_add_secret", 00:05:56.720 "iscsi_delete_auth_group", 00:05:56.720 "iscsi_create_auth_group", 00:05:56.720 "iscsi_set_discovery_auth", 00:05:56.720 "iscsi_get_options", 00:05:56.720 "iscsi_target_node_request_logout", 00:05:56.720 "iscsi_target_node_set_redirect", 00:05:56.720 "iscsi_target_node_set_auth", 00:05:56.720 "iscsi_target_node_add_lun", 00:05:56.720 "iscsi_get_stats", 00:05:56.720 "iscsi_get_connections", 00:05:56.720 "iscsi_portal_group_set_auth", 
00:05:56.720 "iscsi_start_portal_group", 00:05:56.720 "iscsi_delete_portal_group", 00:05:56.720 "iscsi_create_portal_group", 00:05:56.720 "iscsi_get_portal_groups", 00:05:56.720 "iscsi_delete_target_node", 00:05:56.720 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.720 "iscsi_target_node_add_pg_ig_maps", 00:05:56.720 "iscsi_create_target_node", 00:05:56.720 "iscsi_get_target_nodes", 00:05:56.720 "iscsi_delete_initiator_group", 00:05:56.720 "iscsi_initiator_group_remove_initiators", 00:05:56.720 "iscsi_initiator_group_add_initiators", 00:05:56.720 "iscsi_create_initiator_group", 00:05:56.720 "iscsi_get_initiator_groups", 00:05:56.720 "nvmf_set_crdt", 00:05:56.720 "nvmf_set_config", 00:05:56.720 "nvmf_set_max_subsystems", 00:05:56.720 "nvmf_stop_mdns_prr", 00:05:56.720 "nvmf_publish_mdns_prr", 00:05:56.720 "nvmf_subsystem_get_listeners", 00:05:56.720 "nvmf_subsystem_get_qpairs", 00:05:56.720 "nvmf_subsystem_get_controllers", 00:05:56.720 "nvmf_get_stats", 00:05:56.720 "nvmf_get_transports", 00:05:56.720 "nvmf_create_transport", 00:05:56.720 "nvmf_get_targets", 00:05:56.720 "nvmf_delete_target", 00:05:56.720 "nvmf_create_target", 00:05:56.720 "nvmf_subsystem_allow_any_host", 00:05:56.720 "nvmf_subsystem_set_keys", 00:05:56.720 "nvmf_subsystem_remove_host", 00:05:56.720 "nvmf_subsystem_add_host", 00:05:56.721 "nvmf_ns_remove_host", 00:05:56.721 "nvmf_ns_add_host", 00:05:56.721 "nvmf_subsystem_remove_ns", 00:05:56.721 "nvmf_subsystem_set_ns_ana_group", 00:05:56.721 "nvmf_subsystem_add_ns", 00:05:56.721 "nvmf_subsystem_listener_set_ana_state", 00:05:56.721 "nvmf_discovery_get_referrals", 00:05:56.721 "nvmf_discovery_remove_referral", 00:05:56.721 "nvmf_discovery_add_referral", 00:05:56.721 "nvmf_subsystem_remove_listener", 00:05:56.721 "nvmf_subsystem_add_listener", 00:05:56.721 "nvmf_delete_subsystem", 00:05:56.721 "nvmf_create_subsystem", 00:05:56.721 "nvmf_get_subsystems", 00:05:56.721 "env_dpdk_get_mem_stats", 00:05:56.721 "nbd_get_disks", 00:05:56.721 
"nbd_stop_disk", 00:05:56.721 "nbd_start_disk", 00:05:56.721 "ublk_recover_disk", 00:05:56.721 "ublk_get_disks", 00:05:56.721 "ublk_stop_disk", 00:05:56.721 "ublk_start_disk", 00:05:56.721 "ublk_destroy_target", 00:05:56.721 "ublk_create_target", 00:05:56.721 "virtio_blk_create_transport", 00:05:56.721 "virtio_blk_get_transports", 00:05:56.721 "vhost_controller_set_coalescing", 00:05:56.721 "vhost_get_controllers", 00:05:56.721 "vhost_delete_controller", 00:05:56.721 "vhost_create_blk_controller", 00:05:56.721 "vhost_scsi_controller_remove_target", 00:05:56.721 "vhost_scsi_controller_add_target", 00:05:56.721 "vhost_start_scsi_controller", 00:05:56.721 "vhost_create_scsi_controller", 00:05:56.721 "thread_set_cpumask", 00:05:56.721 "scheduler_set_options", 00:05:56.721 "framework_get_governor", 00:05:56.721 "framework_get_scheduler", 00:05:56.721 "framework_set_scheduler", 00:05:56.721 "framework_get_reactors", 00:05:56.721 "thread_get_io_channels", 00:05:56.721 "thread_get_pollers", 00:05:56.721 "thread_get_stats", 00:05:56.721 "framework_monitor_context_switch", 00:05:56.721 "spdk_kill_instance", 00:05:56.721 "log_enable_timestamps", 00:05:56.721 "log_get_flags", 00:05:56.721 "log_clear_flag", 00:05:56.721 "log_set_flag", 00:05:56.721 "log_get_level", 00:05:56.721 "log_set_level", 00:05:56.721 "log_get_print_level", 00:05:56.721 "log_set_print_level", 00:05:56.721 "framework_enable_cpumask_locks", 00:05:56.721 "framework_disable_cpumask_locks", 00:05:56.721 "framework_wait_init", 00:05:56.721 "framework_start_init", 00:05:56.721 "scsi_get_devices", 00:05:56.721 "bdev_get_histogram", 00:05:56.721 "bdev_enable_histogram", 00:05:56.721 "bdev_set_qos_limit", 00:05:56.721 "bdev_set_qd_sampling_period", 00:05:56.721 "bdev_get_bdevs", 00:05:56.721 "bdev_reset_iostat", 00:05:56.721 "bdev_get_iostat", 00:05:56.721 "bdev_examine", 00:05:56.721 "bdev_wait_for_examine", 00:05:56.721 "bdev_set_options", 00:05:56.721 "accel_get_stats", 00:05:56.721 "accel_set_options", 
00:05:56.721 "accel_set_driver", 00:05:56.721 "accel_crypto_key_destroy", 00:05:56.721 "accel_crypto_keys_get", 00:05:56.721 "accel_crypto_key_create", 00:05:56.721 "accel_assign_opc", 00:05:56.721 "accel_get_module_info", 00:05:56.721 "accel_get_opc_assignments", 00:05:56.721 "vmd_rescan", 00:05:56.721 "vmd_remove_device", 00:05:56.721 "vmd_enable", 00:05:56.721 "sock_get_default_impl", 00:05:56.721 "sock_set_default_impl", 00:05:56.721 "sock_impl_set_options", 00:05:56.721 "sock_impl_get_options", 00:05:56.721 "iobuf_get_stats", 00:05:56.721 "iobuf_set_options", 00:05:56.721 "keyring_get_keys", 00:05:56.721 "framework_get_pci_devices", 00:05:56.721 "framework_get_config", 00:05:56.721 "framework_get_subsystems", 00:05:56.721 "fsdev_set_opts", 00:05:56.721 "fsdev_get_opts", 00:05:56.721 "trace_get_info", 00:05:56.721 "trace_get_tpoint_group_mask", 00:05:56.721 "trace_disable_tpoint_group", 00:05:56.721 "trace_enable_tpoint_group", 00:05:56.721 "trace_clear_tpoint_mask", 00:05:56.721 "trace_set_tpoint_mask", 00:05:56.721 "notify_get_notifications", 00:05:56.721 "notify_get_types", 00:05:56.721 "spdk_get_version", 00:05:56.721 "rpc_get_methods" 00:05:56.721 ] 00:05:56.721 19:04:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.721 19:04:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.721 19:04:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57935 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57935 ']' 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57935 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.721 19:04:06 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57935 00:05:56.721 killing process with pid 57935 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57935' 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57935 00:05:56.721 19:04:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57935 00:05:59.263 ************************************ 00:05:59.263 END TEST spdkcli_tcp 00:05:59.263 ************************************ 00:05:59.263 00:05:59.263 real 0m4.496s 00:05:59.263 user 0m7.827s 00:05:59.263 sys 0m0.836s 00:05:59.263 19:04:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.263 19:04:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.524 19:04:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.524 19:04:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.524 19:04:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.524 19:04:08 -- common/autotest_common.sh@10 -- # set +x 00:05:59.524 ************************************ 00:05:59.524 START TEST dpdk_mem_utility 00:05:59.524 ************************************ 00:05:59.524 19:04:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:59.524 * Looking for test storage... 
00:05:59.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.524 19:04:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.524 --rc genhtml_branch_coverage=1 00:05:59.524 --rc genhtml_function_coverage=1 00:05:59.524 --rc genhtml_legend=1 00:05:59.524 --rc geninfo_all_blocks=1 00:05:59.524 --rc geninfo_unexecuted_blocks=1 00:05:59.524 00:05:59.524 ' 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.524 --rc genhtml_branch_coverage=1 00:05:59.524 --rc genhtml_function_coverage=1 00:05:59.524 --rc genhtml_legend=1 00:05:59.524 --rc geninfo_all_blocks=1 00:05:59.524 --rc 
geninfo_unexecuted_blocks=1 00:05:59.524 00:05:59.524 ' 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.524 --rc genhtml_branch_coverage=1 00:05:59.524 --rc genhtml_function_coverage=1 00:05:59.524 --rc genhtml_legend=1 00:05:59.524 --rc geninfo_all_blocks=1 00:05:59.524 --rc geninfo_unexecuted_blocks=1 00:05:59.524 00:05:59.524 ' 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.524 --rc genhtml_branch_coverage=1 00:05:59.524 --rc genhtml_function_coverage=1 00:05:59.524 --rc genhtml_legend=1 00:05:59.524 --rc geninfo_all_blocks=1 00:05:59.524 --rc geninfo_unexecuted_blocks=1 00:05:59.524 00:05:59.524 ' 00:05:59.524 19:04:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:59.524 19:04:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58063 00:05:59.524 19:04:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.524 19:04:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58063 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58063 ']' 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.524 19:04:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.784 [2024-11-27 19:04:09.255735] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:59.784 [2024-11-27 19:04:09.255877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58063 ] 00:06:00.042 [2024-11-27 19:04:09.435804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.042 [2024-11-27 19:04:09.575906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.979 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.979 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:00.979 19:04:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:00.979 19:04:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:00.980 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.980 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.980 { 00:06:00.980 "filename": "/tmp/spdk_mem_dump.txt" 00:06:00.980 } 00:06:00.980 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.980 19:04:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.241 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:01.241 1 heaps totaling size 824.000000 MiB 00:06:01.241 size: 824.000000 MiB heap id: 0 00:06:01.241 end heaps---------- 00:06:01.241 9 mempools totaling size 603.782043 MiB 00:06:01.241 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.241 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.241 size: 100.555481 MiB name: bdev_io_58063 00:06:01.241 size: 50.003479 MiB name: msgpool_58063 00:06:01.241 size: 36.509338 MiB name: fsdev_io_58063 00:06:01.241 size: 21.763794 MiB name: PDU_Pool 00:06:01.241 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.241 size: 4.133484 MiB name: evtpool_58063 00:06:01.241 size: 0.026123 MiB name: Session_Pool 00:06:01.241 end mempools------- 00:06:01.241 6 memzones totaling size 4.142822 MiB 00:06:01.241 size: 1.000366 MiB name: RG_ring_0_58063 00:06:01.241 size: 1.000366 MiB name: RG_ring_1_58063 00:06:01.241 size: 1.000366 MiB name: RG_ring_4_58063 00:06:01.241 size: 1.000366 MiB name: RG_ring_5_58063 00:06:01.241 size: 0.125366 MiB name: RG_ring_2_58063 00:06:01.241 size: 0.015991 MiB name: RG_ring_3_58063 00:06:01.241 end memzones------- 00:06:01.241 19:04:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.241 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:06:01.241 list of free elements. 
size: 16.779419 MiB 00:06:01.241 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:01.241 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:01.241 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:01.241 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:01.241 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:01.241 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:01.241 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:01.241 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:01.241 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:01.241 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:01.241 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:01.241 element at address: 0x20001b400000 with size: 0.560730 MiB 00:06:01.241 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:01.241 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:01.241 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:01.241 element at address: 0x200012c00000 with size: 0.433472 MiB 00:06:01.241 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:01.241 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:01.241 list of standard malloc elements. 
size: 199.289673 MiB
00:06:01.241 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:01.241 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:01.241 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:01.241 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:06:01.241 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:06:01.241 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:01.241 element at address: 0x200019deff40 with size: 0.062683 MiB
00:06:01.241 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:01.241 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:01.241 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:06:01.241 element at address: 0x200012bff040 with size: 0.000305 MiB
00:06:01.241 [several hundred repeated slab elements of 0.000244 MiB each, addresses 0x2000002d7b00 through 0x20002886fe80]
00:06:01.243 list of memzone associated elements.
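A dump like the one above is easier to audit when totalled offline. The sketch below sums the per-element sizes from a captured dump; the file name `mem_dump.log` and the two sample entries are assumptions for illustration, not data from this run.

```shell
#!/usr/bin/env bash
# Sketch: total the "element at address ... with size ..." records emitted
# by dpdk_mem_utility. The sample dump below is a stand-in.
cat > mem_dump.log <<'EOF'
element at address: 0x200028800000 with size: 0.390442 MiB
element at address: 0x200000800000 with size: 0.350891 MiB
EOF
awk '/element at address/ {
    # a wrapped log line may carry several "size:" fields; count them all
    for (i = 1; i <= NF; i++)
        if ($i == "size:") { total += $(i + 1); n++ }
} END { printf "total: %.6f MiB across %d elements\n", total, n }' mem_dump.log
# -> total: 0.741333 MiB across 2 elements
```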
size: 607.930908 MiB
00:06:01.243 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:06:01.243 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:01.243 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:06:01.243 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:01.243 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:06:01.243 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58063_0
00:06:01.243 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:01.243 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58063_0
00:06:01.243 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:01.243 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58063_0
00:06:01.243 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:06:01.243 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:01.243 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:06:01.243 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:01.243 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:06:01.243 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58063_0
00:06:01.243 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:06:01.243 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58063
00:06:01.243 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:01.243 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58063
00:06:01.243 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:06:01.243 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:01.243 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:06:01.243 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:01.243 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:06:01.243 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:01.243 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:06:01.243 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:01.244 element at address: 0x200000cff100 with size: 1.000549 MiB
00:06:01.244 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58063
00:06:01.244 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:06:01.244 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58063
00:06:01.244 element at address: 0x200019affd40 with size: 1.000549 MiB
00:06:01.244 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58063
00:06:01.244 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:06:01.244 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58063
00:06:01.244 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:06:01.244 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58063
00:06:01.244 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:06:01.244 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58063
00:06:01.244 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:06:01.244 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:01.244 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:06:01.244 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:01.244 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:06:01.244 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:01.244 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:06:01.244 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58063
00:06:01.244 element at address: 0x20000085df80 with size: 0.125549 MiB
00:06:01.244 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58063
00:06:01.244 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:06:01.244 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:01.244 element at address: 0x200028864140 with size: 0.023804 MiB
00:06:01.244 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:01.244 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:01.244 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58063
00:06:01.244 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:06:01.244 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:01.244 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:01.244 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58063
00:06:01.244 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:01.244 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58063
00:06:01.244 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:01.244 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58063
00:06:01.244 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:06:01.244 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:01.244 19:04:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:01.244 19:04:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58063
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58063 ']'
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58063
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58063
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58063'
00:06:01.244 killing process with pid 58063
19:04:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58063
00:06:01.244 19:04:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58063
00:06:03.782
00:06:03.782 real 0m4.360s
00:06:03.782 user 0m4.074s
00:06:03.782 sys 0m0.757s
00:06:03.782 19:04:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.782 19:04:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:03.782 ************************************
00:06:03.782 END TEST dpdk_mem_utility
00:06:03.782 ************************************
00:06:03.782 19:04:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:03.782 19:04:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:03.782 19:04:13 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.782 19:04:13 -- common/autotest_common.sh@10 -- # set +x
00:06:03.782 ************************************
00:06:03.782 START TEST event
00:06:03.782 ************************************
19:04:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:04.042 * Looking for test storage...
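The `killprocess 58063` trace above follows a common guard pattern: refuse to signal unless the pid argument is non-empty and the process still exists, then kill and reap it. A minimal standalone sketch of that pattern (a reduced reconstruction, not SPDK's actual helper; the backgrounded `sleep` is a hypothetical stand-in for the app under test):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard pattern seen in the trace: validate the
# pid, confirm the process is alive, then signal it and reap the status.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # mirrors the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1 # process must still be alive
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true        # reap; ignore the SIGTERM status
}

sleep 30 &                                 # hypothetical stand-in target
killprocess "$!"
```

`kill -0` sends no signal at all; it only checks that the pid exists and is signalable, which is why it works as an existence probe here.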
00:06:04.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:04.042 19:04:13 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:04.042 19:04:13 event -- common/autotest_common.sh@1693 -- # lcov --version
00:06:04.042 19:04:13 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:04.042 19:04:13 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:04.042 19:04:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:04.042 19:04:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:04.042 19:04:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:04.042 19:04:13 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:04.042 19:04:13 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:04.042 19:04:13 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:04.042 19:04:13 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:04.042 19:04:13 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:04.042 19:04:13 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:04.042 19:04:13 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:04.042 19:04:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:04.042 19:04:13 event -- scripts/common.sh@344 -- # case "$op" in
00:06:04.042 19:04:13 event -- scripts/common.sh@345 -- # : 1
00:06:04.042 19:04:13 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:04.042 19:04:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:04.042 19:04:13 event -- scripts/common.sh@365 -- # decimal 1
00:06:04.042 19:04:13 event -- scripts/common.sh@353 -- # local d=1
00:06:04.042 19:04:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:04.042 19:04:13 event -- scripts/common.sh@355 -- # echo 1
00:06:04.042 19:04:13 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:04.042 19:04:13 event -- scripts/common.sh@366 -- # decimal 2
00:06:04.042 19:04:13 event -- scripts/common.sh@353 -- # local d=2
00:06:04.042 19:04:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:04.042 19:04:13 event -- scripts/common.sh@355 -- # echo 2
00:06:04.043 19:04:13 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:04.043 19:04:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:04.043 19:04:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:04.043 19:04:13 event -- scripts/common.sh@368 -- # return 0
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.043 --rc genhtml_branch_coverage=1
00:06:04.043 --rc genhtml_function_coverage=1
00:06:04.043 --rc genhtml_legend=1
00:06:04.043 --rc geninfo_all_blocks=1
00:06:04.043 --rc geninfo_unexecuted_blocks=1
00:06:04.043 
00:06:04.043 '
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.043 --rc genhtml_branch_coverage=1
00:06:04.043 --rc genhtml_function_coverage=1
00:06:04.043 --rc genhtml_legend=1
00:06:04.043 --rc geninfo_all_blocks=1
00:06:04.043 --rc geninfo_unexecuted_blocks=1
00:06:04.043 
00:06:04.043 '
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.043 --rc genhtml_branch_coverage=1
00:06:04.043 --rc genhtml_function_coverage=1
00:06:04.043 --rc genhtml_legend=1
00:06:04.043 --rc geninfo_all_blocks=1
00:06:04.043 --rc geninfo_unexecuted_blocks=1
00:06:04.043 
00:06:04.043 '
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:04.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:04.043 --rc genhtml_branch_coverage=1
00:06:04.043 --rc genhtml_function_coverage=1
00:06:04.043 --rc genhtml_legend=1
00:06:04.043 --rc geninfo_all_blocks=1
00:06:04.043 --rc geninfo_unexecuted_blocks=1
00:06:04.043 
00:06:04.043 '
00:06:04.043 19:04:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:04.043 19:04:13 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:04.043 19:04:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:06:04.043 19:04:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.043 19:04:13 event -- common/autotest_common.sh@10 -- # set +x
00:06:04.043 ************************************
00:06:04.043 START TEST event_perf
00:06:04.043 ************************************
00:06:04.043 19:04:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:04.043 Running I/O for 1 seconds...[2024-11-27 19:04:13.640384] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
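The xtrace above steps through SPDK's `cmp_versions` helper from scripts/common.sh as it checks the `lcov --version` number against 2: both versions are split on `IFS=.-:` into arrays and compared component by component. A minimal standalone sketch of that dotted-version comparison; the function name `version_lt` is illustrative (not from the harness), and like the traced call it assumes purely numeric components:

```shell
# Hypothetical minimal re-implementation of the version check traced
# above (the real helper lives in scripts/common.sh as cmp_versions).
version_lt() {
    # Split both versions on '.', '-' and ':', as in the trace (IFS=.-:)
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    # Walk up to the longer version; missing components count as 0
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0    # strictly less-than: stop here
        (( x > y )) && return 1    # strictly greater-than: not less
    done
    return 1                       # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

This mirrors why the traced `lt 1.15 2` returns 0: the first components 1 and 2 already decide the comparison, so the remaining components are never consulted.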
00:06:04.043 [2024-11-27 19:04:13.640549] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ]
00:06:04.302 [2024-11-27 19:04:13.816103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:04.561 [2024-11-27 19:04:13.955740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.561 Running I/O for 1 seconds...[2024-11-27 19:04:13.955890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:04.561 [2024-11-27 19:04:13.956037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.561 [2024-11-27 19:04:13.956076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.938 
00:06:05.938 lcore 0: 93389
00:06:05.938 lcore 1: 93392
00:06:05.938 lcore 2: 93388
00:06:05.938 lcore 3: 93392
00:06:05.938 done.
00:06:05.938 
00:06:05.938 real	0m1.627s
00:06:05.938 user	0m4.376s
00:06:05.938 sys	0m0.128s
00:06:05.938 19:04:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:05.938 19:04:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:05.938 ************************************
00:06:05.938 END TEST event_perf
00:06:05.938 ************************************
00:06:05.938 19:04:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:05.938 19:04:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:05.938 19:04:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:05.938 19:04:15 event -- common/autotest_common.sh@10 -- # set +x
00:06:05.938 ************************************
00:06:05.938 START TEST event_reactor
00:06:05.938 ************************************
00:06:05.938 19:04:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:05.938 [2024-11-27 19:04:15.338499] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:06:05.938 [2024-11-27 19:04:15.338700] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ]
00:06:05.938 [2024-11-27 19:04:15.520056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.197 [2024-11-27 19:04:15.660463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.573 test_start
00:06:07.573 oneshot
00:06:07.573 tick 100
00:06:07.573 tick 100
00:06:07.573 tick 250
00:06:07.573 tick 100
00:06:07.573 tick 100
00:06:07.573 tick 100
00:06:07.573 tick 250
00:06:07.573 tick 500
00:06:07.573 tick 100
00:06:07.573 tick 100
00:06:07.573 tick 250
00:06:07.573 tick 100
00:06:07.573 tick 100
00:06:07.573 test_end
00:06:07.573 
00:06:07.573 real	0m1.612s
00:06:07.573 user	0m1.380s
00:06:07.573 sys	0m0.122s
00:06:07.573 19:04:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.573 19:04:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:07.573 ************************************
00:06:07.573 END TEST event_reactor
00:06:07.573 ************************************
00:06:07.573 19:04:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:07.573 19:04:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:07.573 19:04:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.573 19:04:16 event -- common/autotest_common.sh@10 -- # set +x
00:06:07.573 ************************************
00:06:07.573 START TEST event_reactor_perf
00:06:07.573 ************************************
00:06:07.573 19:04:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:07.573 [2024-11-27 19:04:17.015379] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:06:07.573 [2024-11-27 19:04:17.015525] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58252 ]
00:06:07.573 [2024-11-27 19:04:17.190323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.832 [2024-11-27 19:04:17.321592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.213 test_start
00:06:09.213 test_end
00:06:09.213 Performance: 402318 events per second
00:06:09.213 
00:06:09.213 real	0m1.596s
00:06:09.213 user	0m1.375s
00:06:09.213 sys	0m0.113s
00:06:09.213 19:04:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.213 ************************************
00:06:09.213 END TEST event_reactor_perf
00:06:09.213 ************************************
00:06:09.213 19:04:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:09.213 19:04:18 event -- event/event.sh@49 -- # uname -s
00:06:09.213 19:04:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:09.213 19:04:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:09.213 19:04:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:09.213 19:04:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.213 19:04:18 event -- common/autotest_common.sh@10 -- # set +x
00:06:09.213 ************************************
00:06:09.213 START TEST event_scheduler
00:06:09.213 ************************************
00:06:09.213 19:04:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:09.213 * Looking for test storage...
00:06:09.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:09.213 19:04:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:09.213 19:04:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:06:09.213 19:04:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:09.472 19:04:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:09.472 19:04:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:09.472 19:04:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:09.472 19:04:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:09.472 19:04:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:09.472 19:04:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:09.473 19:04:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.473 --rc genhtml_branch_coverage=1
00:06:09.473 --rc genhtml_function_coverage=1
00:06:09.473 --rc genhtml_legend=1
00:06:09.473 --rc geninfo_all_blocks=1
00:06:09.473 --rc geninfo_unexecuted_blocks=1
00:06:09.473 
00:06:09.473 '
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.473 --rc genhtml_branch_coverage=1
00:06:09.473 --rc genhtml_function_coverage=1
00:06:09.473 --rc genhtml_legend=1
00:06:09.473 --rc geninfo_all_blocks=1
00:06:09.473 --rc geninfo_unexecuted_blocks=1
00:06:09.473 
00:06:09.473 '
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.473 --rc genhtml_branch_coverage=1
00:06:09.473 --rc genhtml_function_coverage=1
00:06:09.473 --rc genhtml_legend=1
00:06:09.473 --rc geninfo_all_blocks=1
00:06:09.473 --rc geninfo_unexecuted_blocks=1
00:06:09.473 
00:06:09.473 '
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:09.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.473 --rc genhtml_branch_coverage=1
00:06:09.473 --rc genhtml_function_coverage=1
00:06:09.473 --rc genhtml_legend=1
00:06:09.473 --rc geninfo_all_blocks=1
00:06:09.473 --rc geninfo_unexecuted_blocks=1
00:06:09.473 
00:06:09.473 '
00:06:09.473 19:04:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:09.473 19:04:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58323
00:06:09.473 19:04:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:09.473 19:04:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:09.473 19:04:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58323
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58323 ']'
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.473 19:04:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:09.473 [2024-11-27 19:04:18.956841] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:06:09.473 [2024-11-27 19:04:18.957063] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58323 ]
00:06:09.731 [2024-11-27 19:04:19.133975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:09.731 [2024-11-27 19:04:19.280566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.731 [2024-11-27 19:04:19.280838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.731 [2024-11-27 19:04:19.281073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.731 [2024-11-27 19:04:19.281177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:10.298 19:04:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.298 POWER: Cannot set governor of lcore 0 to userspace
00:06:10.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.298 POWER: Cannot set governor of lcore 0 to performance
00:06:10.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.298 POWER: Cannot set governor of lcore 0 to userspace
00:06:10.298 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:10.298 POWER: Cannot set governor of lcore 0 to userspace
00:06:10.298 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:10.298 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:10.298 POWER: Unable to set Power Management Environment for lcore 0 [2024-11-27 19:04:19.813661] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 [2024-11-27 19:04:19.813685] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 [2024-11-27 19:04:19.813707] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor [2024-11-27 19:04:19.813729] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 [2024-11-27 19:04:19.813738] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 [2024-11-27 19:04:19.813749] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.298 19:04:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.298 19:04:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 [2024-11-27 19:04:20.209180] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
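The scheduler_create_thread test that follows drives thread creation through repeated `rpc_cmd --plugin scheduler_plugin scheduler_thread_create` calls: one active (load 100) and one idle (load 0) thread pinned per core mask. A condensed sketch of that loop pattern; `rpc_cmd` is stubbed here to echo its arguments so the sketch runs anywhere, whereas in the harness it forwards the RPC to the SPDK application:

```shell
# Stand-in for the harness's rpc_cmd wrapper (assumption: the real one
# dispatches the command to the SPDK app's RPC socket).
rpc_cmd() { echo "rpc_cmd $*"; }

# One active_pinned (-a 100) and one idle_pinned (-a 0) thread per core
# mask, mirroring the -m/-a arguments in the traced calls below.
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
```

The trace additionally creates unpinned threads (one_third_active, half_active), changes a thread's load with scheduler_thread_set_active, and deletes one with scheduler_thread_delete, exercising the dynamic scheduler's rebalancing paths.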
00:06:10.868 19:04:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:10.868 19:04:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:10.868 19:04:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 ************************************
00:06:10.868 START TEST scheduler_create_thread
00:06:10.868 ************************************
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 2
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 3
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 4
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 5
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 6
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 7
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 8
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 9
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:10.868 10
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:10.868 19:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:12.248 19:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.249 19:04:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:12.249 19:04:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:12.249 19:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.249 19:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.186 19:04:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.186 19:04:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:13.186 19:04:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.186 19:04:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:13.754 19:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:13.754 19:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:13.754 19:04:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:13.754 19:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:13.754 19:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:14.689 19:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.689 
00:06:14.689 real	0m3.885s
00:06:14.689 user	0m0.027s
00:06:14.689 sys	0m0.011s
00:06:14.689 ************************************
00:06:14.689 END TEST scheduler_create_thread
00:06:14.689 ************************************
00:06:14.689 19:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.689 19:04:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:14.689 19:04:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:14.689 19:04:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58323
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58323 ']'
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58323
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58323
00:06:14.689 killing process with pid 58323 19:04:24 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58323'
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58323
00:06:14.689 19:04:24 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58323
00:06:14.948 [2024-11-27 19:04:24.488306] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:16.327 
00:06:16.327 real	0m7.102s
00:06:16.327 user	0m14.531s
00:06:16.327 sys	0m0.609s
00:06:16.327 ************************************
00:06:16.327 END TEST event_scheduler
00:06:16.327 ************************************
00:06:16.327 19:04:25 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.327 19:04:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:16.327 19:04:25 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:16.327 19:04:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:16.327 19:04:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.327 19:04:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.327 19:04:25 event -- common/autotest_common.sh@10 -- # set +x
00:06:16.327 ************************************
00:06:16.327 START TEST app_repeat
00:06:16.327 ************************************
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58451
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58451'
00:06:16.327 Process app_repeat pid: 58451 spdk_app_start Round 0 19:04:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:16.327 19:04:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58451 /var/tmp/spdk-nbd.sock
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58451 ']'
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:16.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.327 19:04:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:16.327 [2024-11-27 19:04:25.885970] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:06:16.327 [2024-11-27 19:04:25.886210] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58451 ]
00:06:16.588 [2024-11-27 19:04:26.058286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:16.588 [2024-11-27 19:04:26.191971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.588 [2024-11-27 19:04:26.192019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.155 19:04:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.155 19:04:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:17.155 19:04:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:17.414 Malloc0
00:06:17.414 19:04:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:17.990 Malloc1
00:06:17.990 19:04:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:17.990 19:04:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:17.991 /dev/nbd0
00:06:17.991 19:04:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:17.991 19:04:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:17.991 1+0 records in
00:06:17.991 1+0
records out 00:06:17.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489053 s, 8.4 MB/s 00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.991 19:04:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:17.991 19:04:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.991 19:04:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.991 19:04:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.250 /dev/nbd1 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.250 1+0 records in 00:06:18.250 1+0 records out 00:06:18.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367491 s, 11.1 MB/s 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.250 19:04:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.250 19:04:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.509 { 00:06:18.509 "nbd_device": "/dev/nbd0", 00:06:18.509 "bdev_name": "Malloc0" 00:06:18.509 }, 00:06:18.509 { 00:06:18.509 "nbd_device": "/dev/nbd1", 00:06:18.509 "bdev_name": "Malloc1" 00:06:18.509 } 00:06:18.509 ]' 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.509 { 00:06:18.509 "nbd_device": "/dev/nbd0", 00:06:18.509 "bdev_name": "Malloc0" 00:06:18.509 }, 00:06:18.509 { 00:06:18.509 "nbd_device": "/dev/nbd1", 00:06:18.509 "bdev_name": "Malloc1" 00:06:18.509 } 00:06:18.509 ]' 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
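The `waitfornbd` records above probe each new device twice: a `grep -w` against `/proc/partitions`, then a one-block `dd` read whose copied size is checked with `stat`. A sketch of the read probe, with a regular file standing in for `/dev/nbdN` (so the log's `iflag=direct` is dropped):

```shell
# Sketch of the waitfornbd read probe from the log: copy one 4 KiB block off
# the device and require a non-empty result. A regular file stands in for
# /dev/nbdN here, so iflag=direct is omitted.
probe_readable() {
    local dev=$1 tmp size
    tmp=$(mktemp)
    dd if="$dev" of="$tmp" bs=4096 count=1 status=none || { rm -f "$tmp"; return 1; }
    size=$(stat -c %s "$tmp")   # as the log does: stat -c %s on the copy
    rm -f "$tmp"
    [ "$size" -ne 0 ]           # mirrors the '[' 4096 '!=' 0 ']' check
}
```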
00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.509 /dev/nbd1' 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.509 /dev/nbd1' 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.509 256+0 records in 00:06:18.509 256+0 records out 00:06:18.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124754 s, 84.1 MB/s 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.509 19:04:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.768 256+0 records in 00:06:18.768 256+0 records out 00:06:18.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023882 s, 43.9 MB/s 00:06:18.768 19:04:28 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.768 256+0 records in 00:06:18.768 256+0 records out 00:06:18.768 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305759 s, 34.3 MB/s 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.768 19:04:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.028 19:04:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.287 19:04:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.547 19:04:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.547 19:04:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.804 19:04:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.180 [2024-11-27 19:04:30.628090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.180 [2024-11-27 19:04:30.760009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.180 [2024-11-27 19:04:30.760013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.438 
[2024-11-27 19:04:30.981971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.438 [2024-11-27 19:04:30.982092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.812 19:04:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.812 spdk_app_start Round 1 00:06:22.812 19:04:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:22.812 19:04:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58451 /var/tmp/spdk-nbd.sock 00:06:22.812 19:04:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58451 ']' 00:06:22.812 19:04:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.812 19:04:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.812 19:04:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
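At this point the log has entered Round 1 of the `for i in {0..2}` loop in `event.sh`: each round restarts `app_repeat`, repeats the bdev/nbd exercise, then kills the instance and sleeps. A sketch of that outer loop, where `run_round` is a hypothetical stand-in for the per-round test body:

```shell
# Sketch of the event.sh round loop visible in the log: three rounds, each
# announcing itself before running the test body. run_round is a
# hypothetical placeholder for the per-round bdev/nbd work.
run_all_rounds() {
    local round rounds=0
    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        run_round "$round" || return 1   # abort the loop if a round fails
        rounds=$((rounds + 1))
    done
    echo "$rounds"                       # how many rounds completed
}
```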
00:06:22.812 19:04:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.812 19:04:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.069 19:04:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.069 19:04:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.069 19:04:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.327 Malloc0 00:06:23.327 19:04:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.584 Malloc1 00:06:23.584 19:04:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.584 19:04:33 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.584 19:04:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.855 /dev/nbd0 00:06:23.855 19:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.855 19:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.855 1+0 records in 00:06:23.855 1+0 records out 00:06:23.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329982 s, 12.4 MB/s 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.855 
19:04:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.855 19:04:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.855 19:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.855 19:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.855 19:04:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.135 /dev/nbd1 00:06:24.135 19:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.135 19:04:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.135 19:04:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.136 19:04:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.136 1+0 records in 00:06:24.136 1+0 records out 00:06:24.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333556 s, 12.3 MB/s 00:06:24.136 19:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.136 19:04:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.136 19:04:33 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.136 19:04:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.136 19:04:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.136 19:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.136 19:04:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.136 19:04:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.136 19:04:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.136 19:04:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.394 { 00:06:24.394 "nbd_device": "/dev/nbd0", 00:06:24.394 "bdev_name": "Malloc0" 00:06:24.394 }, 00:06:24.394 { 00:06:24.394 "nbd_device": "/dev/nbd1", 00:06:24.394 "bdev_name": "Malloc1" 00:06:24.394 } 00:06:24.394 ]' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.394 { 00:06:24.394 "nbd_device": "/dev/nbd0", 00:06:24.394 "bdev_name": "Malloc0" 00:06:24.394 }, 00:06:24.394 { 00:06:24.394 "nbd_device": "/dev/nbd1", 00:06:24.394 "bdev_name": "Malloc1" 00:06:24.394 } 00:06:24.394 ]' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.394 /dev/nbd1' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.394 /dev/nbd1' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.394 
19:04:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.394 256+0 records in 00:06:24.394 256+0 records out 00:06:24.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146408 s, 71.6 MB/s 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.394 256+0 records in 00:06:24.394 256+0 records out 00:06:24.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233812 s, 44.8 MB/s 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.394 256+0 records in 00:06:24.394 256+0 records out 00:06:24.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259256 s, 40.4 MB/s 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
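The `nbd_dd_data_verify` records above first fill a temp file with 1 MiB from `/dev/urandom`, `dd` it onto each device, then `cmp` the device contents back against the source. A sketch of that write/verify round trip, with temp files standing in for `/dev/nbd0` and `/dev/nbd1` (so `oflag=direct` is omitted, and the log's `-b -n 1M` bound on `cmp`, needed because the real devices are larger than the pattern file, is unnecessary here):

```shell
# Sketch of nbd_dd_data_verify's write and verify phases from the log:
# write the 1 MiB random pattern onto each device, then compare the device
# contents byte-for-byte against the source file.
write_and_verify() {
    local src=$1; shift
    local dev
    for dev in "$@"; do
        dd if="$src" of="$dev" bs=4096 count=256 status=none
        cmp "$src" "$dev" || return 1   # any mismatched byte fails the round
    done
}
```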
00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.394 19:04:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.652 19:04:34 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.652 19:04:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.911 19:04:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.169 19:04:34 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.169 19:04:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.169 19:04:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.734 19:04:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.110 [2024-11-27 19:04:36.415369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.110 [2024-11-27 19:04:36.554549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.110 [2024-11-27 19:04:36.554579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.368 [2024-11-27 19:04:36.781473] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.368 [2024-11-27 19:04:36.781610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.743 spdk_app_start Round 2 00:06:28.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
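The `nbd_get_count` records above parse the `nbd_get_disks` RPC output with `jq` and count lines matching `/dev/nbd` via `grep -c`; after `nbd_stop_disk` the list is `[]` and the count drops to 0, as the log shows. A sketch of that counting step, with a literal JSON string standing in for the RPC output:

```shell
# Sketch of nbd_get_count from the log: extract each nbd_device field from
# the nbd_get_disks JSON and count entries matching /dev/nbd. The '|| true'
# mirrors the log's handling of grep -c exiting non-zero on zero matches.
count_nbd_disks() {
    local json=$1
    echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}
```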
00:06:28.743 19:04:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.743 19:04:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.743 19:04:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58451 /var/tmp/spdk-nbd.sock 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58451 ']' 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.743 19:04:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.743 19:04:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.001 Malloc0 00:06:29.260 19:04:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.520 Malloc1 00:06:29.520 19:04:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.520 19:04:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.520 /dev/nbd0 00:06:29.520 19:04:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.520 19:04:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
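The records above trace the device-wait pattern used by `waitfornbd`: poll `/proc/partitions` until the named nbd device appears, then issue one direct read to confirm it answers I/O. A minimal standalone sketch of that loop follows; the function name, the optional partitions-file parameter, and the sleep interval are illustrative assumptions, not the SPDK helper itself (only the 20-attempt limit and the `grep -q -w` check mirror the trace):

```shell
# Hedged sketch of the nbd device-wait loop seen in the trace above.
# Poll the partitions table until the device name shows up as a whole
# word, up to 20 attempts (as in the traced loop). The second argument
# exists only so the sketch can be exercised against a fake partitions
# file; the real helper reads /proc/partitions directly.
wait_for_block_dev() {
    local name=$1 parts=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # Succeeds once the kernel has registered the device.
        grep -q -w "$name" "$parts" && return 0
        sleep 0.1
    done
    return 1
}
```

In the trace, a successful wait is followed by `dd if=/dev/nbd0 ... iflag=direct` to verify the device actually serves a 4096-byte read before the test proceeds.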
00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.520 1+0 records in 00:06:29.520 1+0 records out 00:06:29.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497987 s, 8.2 MB/s 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.520 19:04:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.520 19:04:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.520 19:04:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.520 19:04:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.778 /dev/nbd1 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:29.778 19:04:39 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.778 1+0 records in 00:06:29.778 1+0 records out 00:06:29.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202961 s, 20.2 MB/s 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.778 19:04:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.778 19:04:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.035 { 00:06:30.035 "nbd_device": "/dev/nbd0", 00:06:30.035 "bdev_name": "Malloc0" 00:06:30.035 }, 00:06:30.035 { 00:06:30.035 "nbd_device": "/dev/nbd1", 00:06:30.035 "bdev_name": "Malloc1" 00:06:30.035 } 00:06:30.035 ]' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.035 { 00:06:30.035 "nbd_device": "/dev/nbd0", 00:06:30.035 "bdev_name": "Malloc0" 00:06:30.035 }, 00:06:30.035 { 00:06:30.035 "nbd_device": "/dev/nbd1", 00:06:30.035 "bdev_name": "Malloc1" 00:06:30.035 } 00:06:30.035 ]' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.035 /dev/nbd1' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.035 /dev/nbd1' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.035 256+0 records in 00:06:30.035 256+0 records out 00:06:30.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140801 s, 74.5 MB/s 00:06:30.035 19:04:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.035 19:04:39 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.293 256+0 records in 00:06:30.293 256+0 records out 00:06:30.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213916 s, 49.0 MB/s 00:06:30.293 19:04:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.293 19:04:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.293 256+0 records in 00:06:30.293 256+0 records out 00:06:30.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250854 s, 41.8 MB/s 00:06:30.293 19:04:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.293 19:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.293 19:04:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.294 19:04:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.553 19:04:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.553 19:04:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.812 19:04:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.812 19:04:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.812 19:04:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.812 19:04:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.070 19:04:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.070 19:04:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.378 19:04:40 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:06:32.760 [2024-11-27 19:04:42.128174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.760 [2024-11-27 19:04:42.263554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.760 [2024-11-27 19:04:42.263556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.020 [2024-11-27 19:04:42.487702] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.020 [2024-11-27 19:04:42.487800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.399 19:04:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58451 /var/tmp/spdk-nbd.sock 00:06:34.399 19:04:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58451 ']' 00:06:34.399 19:04:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.399 19:04:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.399 19:04:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:34.399 19:04:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.399 19:04:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.659 19:04:44 event.app_repeat -- event/event.sh@39 -- # killprocess 58451 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58451 ']' 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58451 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58451 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58451' 00:06:34.659 killing process with pid 58451 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58451 00:06:34.659 19:04:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58451 00:06:35.598 spdk_app_start is called in Round 0. 00:06:35.598 Shutdown signal received, stop current app iteration 00:06:35.598 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:35.598 spdk_app_start is called in Round 1. 00:06:35.598 Shutdown signal received, stop current app iteration 00:06:35.598 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:35.598 spdk_app_start is called in Round 2. 
00:06:35.598 Shutdown signal received, stop current app iteration 00:06:35.598 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:06:35.598 spdk_app_start is called in Round 3. 00:06:35.598 Shutdown signal received, stop current app iteration 00:06:35.857 19:04:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.857 19:04:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.857 00:06:35.857 real 0m19.448s 00:06:35.857 user 0m41.105s 00:06:35.857 sys 0m3.081s 00:06:35.857 19:04:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.857 19:04:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 ************************************ 00:06:35.857 END TEST app_repeat 00:06:35.857 ************************************ 00:06:35.857 19:04:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.857 19:04:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.857 19:04:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.857 19:04:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.857 19:04:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 ************************************ 00:06:35.857 START TEST cpu_locks 00:06:35.857 ************************************ 00:06:35.857 19:04:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.857 * Looking for test storage... 
00:06:35.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.857 19:04:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.857 19:04:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.857 19:04:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.118 19:04:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.118 --rc genhtml_branch_coverage=1 00:06:36.118 --rc genhtml_function_coverage=1 00:06:36.118 --rc genhtml_legend=1 00:06:36.118 --rc geninfo_all_blocks=1 00:06:36.118 --rc geninfo_unexecuted_blocks=1 00:06:36.118 00:06:36.118 ' 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.118 --rc genhtml_branch_coverage=1 00:06:36.118 --rc genhtml_function_coverage=1 00:06:36.118 --rc genhtml_legend=1 00:06:36.118 --rc geninfo_all_blocks=1 00:06:36.118 --rc geninfo_unexecuted_blocks=1 
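The `cpu_locks` preamble above traces `scripts/common.sh` comparing the installed lcov version against a minimum (`lt 1.15 2`): both version strings are split on `.`, `-`, and `:` via `IFS`, then compared numerically field by field. A hedged re-creation of that comparison is sketched below; the function name is illustrative and missing fields are padded with 0, an assumption the trace does not show:

```shell
# Sketch of the dotted-version "less than" comparison traced above.
# Split both strings on '.', '-' and ':' (the IFS the trace sets),
# then compare corresponding fields as integers.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad short versions with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

Field-wise numeric comparison is what makes `1.2.3 < 1.10` hold, where a plain string compare would get it wrong.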
00:06:36.118 00:06:36.118 ' 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.118 --rc genhtml_branch_coverage=1 00:06:36.118 --rc genhtml_function_coverage=1 00:06:36.118 --rc genhtml_legend=1 00:06:36.118 --rc geninfo_all_blocks=1 00:06:36.118 --rc geninfo_unexecuted_blocks=1 00:06:36.118 00:06:36.118 ' 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.118 --rc genhtml_branch_coverage=1 00:06:36.118 --rc genhtml_function_coverage=1 00:06:36.118 --rc genhtml_legend=1 00:06:36.118 --rc geninfo_all_blocks=1 00:06:36.118 --rc geninfo_unexecuted_blocks=1 00:06:36.118 00:06:36.118 ' 00:06:36.118 19:04:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.118 19:04:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.118 19:04:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.118 19:04:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.118 19:04:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.118 ************************************ 00:06:36.118 START TEST default_locks 00:06:36.118 ************************************ 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58898 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.118 
19:04:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58898 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58898 ']' 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.118 19:04:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.118 [2024-11-27 19:04:45.676133] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:36.118 [2024-11-27 19:04:45.676333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58898 ] 00:06:36.378 [2024-11-27 19:04:45.850292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.378 [2024-11-27 19:04:45.990329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.754 19:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.754 19:04:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:37.754 19:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58898 00:06:37.754 19:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58898 00:06:37.754 19:04:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58898 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58898 ']' 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58898 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58898 00:06:37.754 killing process with pid 58898 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58898' 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58898 00:06:37.754 19:04:47 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58898 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58898 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58898 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58898 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58898 ']' 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.289 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58898) - No such process 00:06:40.289 ERROR: process (pid: 58898) is no longer running 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.289 00:06:40.289 real 0m4.314s 00:06:40.289 user 0m4.088s 00:06:40.289 sys 0m0.759s 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.289 19:04:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.289 ************************************ 00:06:40.289 END TEST default_locks 00:06:40.289 ************************************ 00:06:40.548 19:04:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.548 19:04:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:40.548 19:04:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.548 19:04:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 ************************************ 00:06:40.548 START TEST default_locks_via_rpc 00:06:40.548 ************************************ 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58975 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58975 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58975 ']' 00:06:40.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.548 19:04:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.548 [2024-11-27 19:04:50.055452] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:40.549 [2024-11-27 19:04:50.055633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58975 ] 00:06:40.807 [2024-11-27 19:04:50.215554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.807 [2024-11-27 19:04:50.346834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.742 19:04:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58975 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58975 00:06:41.742 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.000 19:04:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58975 00:06:42.001 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58975 ']' 00:06:42.001 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58975 00:06:42.001 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58975 00:06:42.259 killing process with pid 58975 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58975' 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58975 00:06:42.259 19:04:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58975 00:06:44.792 ************************************ 00:06:44.792 END TEST default_locks_via_rpc 00:06:44.792 ************************************ 00:06:44.792 00:06:44.792 real 0m4.280s 00:06:44.792 user 0m4.017s 00:06:44.792 sys 0m0.777s 00:06:44.792 
19:04:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.792 19:04:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.792 19:04:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.792 19:04:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.792 19:04:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.792 19:04:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.792 ************************************ 00:06:44.792 START TEST non_locking_app_on_locked_coremask 00:06:44.792 ************************************ 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59055 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59055 /var/tmp/spdk.sock 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59055 ']' 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:44.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.792 19:04:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.792 [2024-11-27 19:04:54.402875] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:44.792 [2024-11-27 19:04:54.403103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59055 ] 00:06:45.051 [2024-11-27 19:04:54.556742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.309 [2024-11-27 19:04:54.695744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59071 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59071 /var/tmp/spdk2.sock 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59071 ']' 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.246 19:04:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.246 19:04:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.246 [2024-11-27 19:04:55.781821] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:46.246 [2024-11-27 19:04:55.782531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:06:46.504 [2024-11-27 19:04:55.955641] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.504 [2024-11-27 19:04:55.955699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.763 [2024-11-27 19:04:56.234071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59055 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59055 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59055 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59055 ']' 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59055 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59055 00:06:49.295 killing process with pid 59055 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59055' 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59055 00:06:49.295 19:04:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59055 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59071 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59071 ']' 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59071 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59071 00:06:54.622 killing process with pid 59071 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59071' 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59071 00:06:54.622 19:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59071 00:06:57.154 00:06:57.154 real 0m12.162s 00:06:57.154 user 0m12.031s 00:06:57.154 sys 0m1.584s 00:06:57.154 19:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:57.154 ************************************ 00:06:57.154 END TEST non_locking_app_on_locked_coremask 00:06:57.154 ************************************ 00:06:57.154 19:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.154 19:05:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.154 19:05:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.154 19:05:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.154 19:05:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.154 ************************************ 00:06:57.154 START TEST locking_app_on_unlocked_coremask 00:06:57.154 ************************************ 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59229 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59229 /var/tmp/spdk.sock 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59229 ']' 00:06:57.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.154 19:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.154 [2024-11-27 19:05:06.627731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:57.154 [2024-11-27 19:05:06.627840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ] 00:06:57.412 [2024-11-27 19:05:06.800678] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.412 [2024-11-27 19:05:06.800737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.413 [2024-11-27 19:05:06.937320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59251 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59251 /var/tmp/spdk2.sock 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59251 ']' 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.347 19:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.604 [2024-11-27 19:05:07.987162] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:58.604 [2024-11-27 19:05:07.987360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59251 ] 00:06:58.604 [2024-11-27 19:05:08.153251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.862 [2024-11-27 19:05:08.433599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.393 19:05:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.393 19:05:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.393 19:05:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59251 00:07:01.393 19:05:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59251 00:07:01.393 19:05:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59229 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59229 ']' 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59229 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59229 00:07:01.960 killing process with pid 59229 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59229' 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59229 00:07:01.960 19:05:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59229 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59251 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59251 ']' 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59251 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59251 00:07:07.228 killing process with pid 59251 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59251' 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59251 00:07:07.228 19:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59251 00:07:09.761 ************************************ 00:07:09.761 00:07:09.761 real 0m12.538s 00:07:09.761 user 0m12.377s 00:07:09.761 sys 0m1.741s 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.762 END TEST locking_app_on_unlocked_coremask 00:07:09.762 ************************************ 00:07:09.762 19:05:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.762 19:05:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.762 19:05:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.762 19:05:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.762 ************************************ 00:07:09.762 START TEST locking_app_on_locked_coremask 00:07:09.762 ************************************ 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59405 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59405 /var/tmp/spdk.sock 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59405 ']' 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.762 19:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.762 [2024-11-27 19:05:19.276388] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:09.762 [2024-11-27 19:05:19.276556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:07:10.021 [2024-11-27 19:05:19.458222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.021 [2024-11-27 19:05:19.594535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59426 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59426 /var/tmp/spdk2.sock 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59426 /var/tmp/spdk2.sock 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59426 /var/tmp/spdk2.sock 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59426 ']' 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.396 19:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.396 [2024-11-27 19:05:20.707301] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:11.396 [2024-11-27 19:05:20.707507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59426 ] 00:07:11.396 [2024-11-27 19:05:20.877156] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59405 has claimed it. 00:07:11.396 [2024-11-27 19:05:20.877238] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.963 ERROR: process (pid: 59426) is no longer running 00:07:11.963 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59426) - No such process 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59405 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59405 00:07:11.963 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59405 00:07:12.223 19:05:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59405 ']' 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59405 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59405 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59405' 00:07:12.223 killing process with pid 59405 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59405 00:07:12.223 19:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59405 00:07:14.752 00:07:14.752 real 0m5.222s 00:07:14.752 user 0m5.234s 00:07:14.752 sys 0m0.987s 00:07:14.752 19:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.752 19:05:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.752 ************************************ 00:07:14.752 END TEST locking_app_on_locked_coremask 00:07:14.752 ************************************ 00:07:15.012 19:05:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:15.012 19:05:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:07:15.012 19:05:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.012 19:05:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.012 ************************************ 00:07:15.012 START TEST locking_overlapped_coremask 00:07:15.012 ************************************ 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59496 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59496 /var/tmp/spdk.sock 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59496 ']' 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.012 19:05:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.012 [2024-11-27 19:05:24.536902] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:15.012 [2024-11-27 19:05:24.537138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59496 ] 00:07:15.270 [2024-11-27 19:05:24.716432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.270 [2024-11-27 19:05:24.861023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.271 [2024-11-27 19:05:24.861123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.271 [2024-11-27 19:05:24.861164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59518 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59518 /var/tmp/spdk2.sock 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59518 /var/tmp/spdk2.sock 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59518 /var/tmp/spdk2.sock 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59518 ']' 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.646 19:05:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.646 [2024-11-27 19:05:25.948802] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:16.646 [2024-11-27 19:05:25.949021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:07:16.646 [2024-11-27 19:05:26.118345] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59496 has claimed it. 00:07:16.646 [2024-11-27 19:05:26.118411] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
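[editor's note] The failure above ("Cannot create lock on core 2, probably process 59496 has claimed it") comes from the per-core lock files visible later in this log (`/var/tmp/spdk_cpu_lock_000` ... `_002`): each `spdk_tgt` claims an exclusive lock per core in its `-m` mask, so a second target with an overlapping mask exits. The sketch below is a hedged bash approximation of that claim using `flock(1)`; the real SPDK code is C and uses `fcntl()`-style locks, and the function name `claim_core` is hypothetical.

```shell
# Hypothetical sketch of the per-core lock claim whose failure is logged
# above. One lock file per core, exclusive non-blocking lock; a core
# already claimed by another process makes the claim fail, mirroring
# "Cannot create lock on core 2 ... - exiting." in the log.
claim_core() {
    local core=$1
    local lockfile fd
    printf -v lockfile '/var/tmp/spdk_cpu_lock_%03d' "$core"
    exec {fd}>>"$lockfile"        # open (and create) the per-core lock file
    if ! flock -n "$fd"; then     # non-blocking exclusive lock, as a sketch
        echo "Cannot create lock on core $core: already claimed" >&2
        return 1
    fi
    echo "claimed core $core via $lockfile"
}

claim_core 2
```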
00:07:17.212 ERROR: process (pid: 59518) is no longer running 00:07:17.212 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59518) - No such process 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59496 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59496 ']' 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59496 00:07:17.212 19:05:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59496 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59496' 00:07:17.212 killing process with pid 59496 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59496 00:07:17.212 19:05:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59496 00:07:19.764 00:07:19.764 real 0m4.906s 00:07:19.764 user 0m13.127s 00:07:19.764 sys 0m0.767s 00:07:19.764 19:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.764 19:05:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.764 ************************************ 00:07:19.764 END TEST locking_overlapped_coremask 00:07:19.764 ************************************ 00:07:19.764 19:05:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.764 19:05:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.764 19:05:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.764 19:05:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.764 ************************************ 00:07:19.764 START TEST 
locking_overlapped_coremask_via_rpc 00:07:19.764 ************************************ 00:07:19.764 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:19.764 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59589 00:07:19.764 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.765 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59589 /var/tmp/spdk.sock 00:07:20.024 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:07:20.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.024 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.024 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.024 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.024 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.024 19:05:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.024 [2024-11-27 19:05:29.504421] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:20.024 [2024-11-27 19:05:29.504640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ] 00:07:20.283 [2024-11-27 19:05:29.685412] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:20.283 [2024-11-27 19:05:29.685488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.283 [2024-11-27 19:05:29.834612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.283 [2024-11-27 19:05:29.834874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.283 [2024-11-27 19:05:29.834919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59607 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59607 /var/tmp/spdk2.sock 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59607 ']' 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.221 19:05:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.480 [2024-11-27 19:05:30.961818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:21.480 [2024-11-27 19:05:30.962062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59607 ] 00:07:21.739 [2024-11-27 19:05:31.142937] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.739 [2024-11-27 19:05:31.142997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.998 [2024-11-27 19:05:31.410909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.998 [2024-11-27 19:05:31.413814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.998 [2024-11-27 19:05:31.413849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.899 19:05:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.899 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.899 [2024-11-27 19:05:33.533001] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59589 has claimed it. 00:07:24.157 request: 00:07:24.157 { 00:07:24.157 "method": "framework_enable_cpumask_locks", 00:07:24.157 "req_id": 1 00:07:24.157 } 00:07:24.157 Got JSON-RPC error response 00:07:24.157 response: 00:07:24.157 { 00:07:24.157 "code": -32603, 00:07:24.157 "message": "Failed to claim CPU core: 2" 00:07:24.157 } 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59589 /var/tmp/spdk.sock 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59589 ']' 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59607 /var/tmp/spdk2.sock 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59607 ']' 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.157 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.416 00:07:24.416 real 0m4.573s 00:07:24.416 user 0m1.260s 00:07:24.416 sys 0m0.239s 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.416 19:05:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.416 ************************************ 00:07:24.416 END TEST locking_overlapped_coremask_via_rpc 00:07:24.416 ************************************ 00:07:24.416 19:05:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:24.416 19:05:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59589 ]] 00:07:24.416 19:05:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59589 00:07:24.416 19:05:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59589 ']' 00:07:24.416 19:05:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59589 00:07:24.416 19:05:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:24.416 19:05:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.416 19:05:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59589 00:07:24.675 19:05:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.675 19:05:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.675 19:05:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59589' 00:07:24.675 killing process with pid 59589 00:07:24.675 19:05:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59589 00:07:24.675 19:05:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59589 00:07:27.208 19:05:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59607 ]] 00:07:27.208 19:05:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59607 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59607 ']' 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59607 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59607 00:07:27.208 killing process with pid 59607 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59607' 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59607 00:07:27.208 19:05:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59607 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:29.759 Process with pid 59589 is not found 00:07:29.759 Process with pid 59607 is not found 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59589 ]] 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59589 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59589 ']' 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59589 00:07:29.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59589) - No such process 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59589 is not found' 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59607 ]] 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59607 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59607 ']' 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59607 00:07:29.759 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59607) - No such process 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59607 is not found' 00:07:29.759 19:05:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:29.759 ************************************ 00:07:29.759 END TEST cpu_locks 00:07:29.759 ************************************ 00:07:29.759 00:07:29.759 real 0m53.861s 00:07:29.759 user 1m29.637s 00:07:29.759 sys 0m8.302s 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:29.759 19:05:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 ************************************ 00:07:29.759 END TEST event 00:07:29.759 ************************************ 00:07:29.759 00:07:29.759 real 1m25.891s 00:07:29.759 user 2m32.647s 00:07:29.759 sys 0m12.772s 00:07:29.759 19:05:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.759 19:05:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 19:05:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:29.759 19:05:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.759 19:05:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.759 19:05:39 -- common/autotest_common.sh@10 -- # set +x 00:07:29.759 ************************************ 00:07:29.759 START TEST thread 00:07:29.759 ************************************ 00:07:29.759 19:05:39 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.018 * Looking for test storage... 
00:07:30.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.018 19:05:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.018 19:05:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.018 19:05:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.018 19:05:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.018 19:05:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.018 19:05:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.018 19:05:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.018 19:05:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.018 19:05:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.018 19:05:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.018 19:05:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.018 19:05:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:30.018 19:05:39 thread -- scripts/common.sh@345 -- # : 1 00:07:30.018 19:05:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.018 19:05:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.018 19:05:39 thread -- scripts/common.sh@365 -- # decimal 1 00:07:30.018 19:05:39 thread -- scripts/common.sh@353 -- # local d=1 00:07:30.018 19:05:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.018 19:05:39 thread -- scripts/common.sh@355 -- # echo 1 00:07:30.018 19:05:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.018 19:05:39 thread -- scripts/common.sh@366 -- # decimal 2 00:07:30.018 19:05:39 thread -- scripts/common.sh@353 -- # local d=2 00:07:30.018 19:05:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.018 19:05:39 thread -- scripts/common.sh@355 -- # echo 2 00:07:30.018 19:05:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.018 19:05:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.018 19:05:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.018 19:05:39 thread -- scripts/common.sh@368 -- # return 0 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.018 --rc genhtml_branch_coverage=1 00:07:30.018 --rc genhtml_function_coverage=1 00:07:30.018 --rc genhtml_legend=1 00:07:30.018 --rc geninfo_all_blocks=1 00:07:30.018 --rc geninfo_unexecuted_blocks=1 00:07:30.018 00:07:30.018 ' 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.018 --rc genhtml_branch_coverage=1 00:07:30.018 --rc genhtml_function_coverage=1 00:07:30.018 --rc genhtml_legend=1 00:07:30.018 --rc geninfo_all_blocks=1 00:07:30.018 --rc geninfo_unexecuted_blocks=1 00:07:30.018 00:07:30.018 ' 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.018 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.018 --rc genhtml_branch_coverage=1 00:07:30.018 --rc genhtml_function_coverage=1 00:07:30.018 --rc genhtml_legend=1 00:07:30.018 --rc geninfo_all_blocks=1 00:07:30.018 --rc geninfo_unexecuted_blocks=1 00:07:30.018 00:07:30.018 ' 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.018 --rc genhtml_branch_coverage=1 00:07:30.018 --rc genhtml_function_coverage=1 00:07:30.018 --rc genhtml_legend=1 00:07:30.018 --rc geninfo_all_blocks=1 00:07:30.018 --rc geninfo_unexecuted_blocks=1 00:07:30.018 00:07:30.018 ' 00:07:30.018 19:05:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.018 19:05:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.018 ************************************ 00:07:30.018 START TEST thread_poller_perf 00:07:30.018 ************************************ 00:07:30.018 19:05:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.018 [2024-11-27 19:05:39.580531] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
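[editor's note] The xtrace above steps through `scripts/common.sh`'s `lt 1.15 2` / `cmp_versions`: both version strings are split on `IFS=.-:` into arrays and compared numerically component by component (here deciding which lcov option names to use). A simplified, hedged sketch of that comparison; the real helper also handles `ge`/`gt`/`le` and versions of unequal length:

```shell
# Simplified version-compare in the style of the cmp_versions xtrace above:
# split on ". - :" and compare numerically, left to right.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i
    for (( i = 0; i < ${#ver1[@]} && i < ${#ver2[@]}; i++ )); do
        (( ver1[i] > ver2[i] )) && return 1
        (( ver1[i] < ver2[i] )) && return 0
    done
    return 1
}

version_lt 1.15 2 && echo "1.15 < 2"   # lcov 1.x takes the old option spelling
```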
00:07:30.018 [2024-11-27 19:05:39.580740] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:07:30.277 [2024-11-27 19:05:39.755299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.277 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:30.277 [2024-11-27 19:05:39.894987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.651 [2024-11-27T19:05:41.287Z] ====================================== 00:07:31.651 [2024-11-27T19:05:41.287Z] busy:2300009334 (cyc) 00:07:31.651 [2024-11-27T19:05:41.287Z] total_run_count: 411000 00:07:31.651 [2024-11-27T19:05:41.287Z] tsc_hz: 2290000000 (cyc) 00:07:31.651 [2024-11-27T19:05:41.287Z] ====================================== 00:07:31.651 [2024-11-27T19:05:41.287Z] poller_cost: 5596 (cyc), 2443 (nsec) 00:07:31.651 00:07:31.651 real 0m1.610s 00:07:31.651 user 0m1.391s 00:07:31.651 sys 0m0.111s 00:07:31.651 19:05:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.651 19:05:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.651 ************************************ 00:07:31.651 END TEST thread_poller_perf 00:07:31.651 ************************************ 00:07:31.651 19:05:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:31.651 19:05:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:31.651 19:05:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.651 19:05:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.651 ************************************ 00:07:31.651 START TEST thread_poller_perf 00:07:31.651 
************************************ 00:07:31.651 19:05:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:31.651 [2024-11-27 19:05:41.256360] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:31.651 [2024-11-27 19:05:41.256476] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59845 ] 00:07:31.909 [2024-11-27 19:05:41.429367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.167 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:32.167 [2024-11-27 19:05:41.571374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.542 [2024-11-27T19:05:43.179Z] ====================================== 00:07:33.543 [2024-11-27T19:05:43.179Z] busy:2293330326 (cyc) 00:07:33.543 [2024-11-27T19:05:43.179Z] total_run_count: 5237000 00:07:33.543 [2024-11-27T19:05:43.179Z] tsc_hz: 2290000000 (cyc) 00:07:33.543 [2024-11-27T19:05:43.179Z] ====================================== 00:07:33.543 [2024-11-27T19:05:43.179Z] poller_cost: 437 (cyc), 190 (nsec) 00:07:33.543 00:07:33.543 real 0m1.605s 00:07:33.543 user 0m1.388s 00:07:33.543 sys 0m0.111s 00:07:33.543 19:05:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.543 ************************************ 00:07:33.543 END TEST thread_poller_perf 00:07:33.543 ************************************ 00:07:33.543 19:05:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.543 19:05:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:33.543 00:07:33.543 real 0m3.560s 00:07:33.543 user 0m2.935s 00:07:33.543 sys 0m0.429s 00:07:33.543 ************************************ 
00:07:33.543 END TEST thread 00:07:33.543 ************************************ 00:07:33.543 19:05:42 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.543 19:05:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.543 19:05:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:33.543 19:05:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.543 19:05:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.543 19:05:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.543 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:33.543 ************************************ 00:07:33.543 START TEST app_cmdline 00:07:33.543 ************************************ 00:07:33.543 19:05:42 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.543 * Looking for test storage... 00:07:33.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.543 19:05:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.543 --rc genhtml_branch_coverage=1 00:07:33.543 --rc genhtml_function_coverage=1 00:07:33.543 --rc 
genhtml_legend=1 00:07:33.543 --rc geninfo_all_blocks=1 00:07:33.543 --rc geninfo_unexecuted_blocks=1 00:07:33.543 00:07:33.543 ' 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.543 --rc genhtml_branch_coverage=1 00:07:33.543 --rc genhtml_function_coverage=1 00:07:33.543 --rc genhtml_legend=1 00:07:33.543 --rc geninfo_all_blocks=1 00:07:33.543 --rc geninfo_unexecuted_blocks=1 00:07:33.543 00:07:33.543 ' 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.543 --rc genhtml_branch_coverage=1 00:07:33.543 --rc genhtml_function_coverage=1 00:07:33.543 --rc genhtml_legend=1 00:07:33.543 --rc geninfo_all_blocks=1 00:07:33.543 --rc geninfo_unexecuted_blocks=1 00:07:33.543 00:07:33.543 ' 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.543 --rc genhtml_branch_coverage=1 00:07:33.543 --rc genhtml_function_coverage=1 00:07:33.543 --rc genhtml_legend=1 00:07:33.543 --rc geninfo_all_blocks=1 00:07:33.543 --rc geninfo_unexecuted_blocks=1 00:07:33.543 00:07:33.543 ' 00:07:33.543 19:05:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.543 19:05:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59936 00:07:33.543 19:05:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.543 19:05:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59936 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59936 ']' 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.543 19:05:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.802 [2024-11-27 19:05:43.262156] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:33.802 [2024-11-27 19:05:43.262290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59936 ] 00:07:34.061 [2024-11-27 19:05:43.437575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.061 [2024-11-27 19:05:43.577178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.996 19:05:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.996 19:05:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:34.996 19:05:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:35.254 { 00:07:35.254 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:35.254 "fields": { 00:07:35.254 "major": 25, 00:07:35.254 "minor": 1, 00:07:35.254 "patch": 0, 00:07:35.254 "suffix": "-pre", 00:07:35.254 "commit": "35cd3e84d" 00:07:35.254 } 00:07:35.254 } 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:35.254 19:05:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:35.254 19:05:44 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.512 request: 00:07:35.512 { 00:07:35.512 "method": "env_dpdk_get_mem_stats", 00:07:35.512 "req_id": 1 00:07:35.512 } 00:07:35.512 Got JSON-RPC error response 00:07:35.512 response: 00:07:35.512 { 00:07:35.512 "code": -32601, 00:07:35.512 "message": "Method not found" 00:07:35.512 } 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.512 19:05:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59936 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59936 ']' 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59936 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59936 00:07:35.512 killing process with pid 59936 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59936' 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59936 00:07:35.512 19:05:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59936 00:07:38.040 00:07:38.040 real 0m4.712s 00:07:38.040 user 0m4.751s 00:07:38.040 sys 0m0.761s 00:07:38.040 19:05:47 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.040 ************************************ 00:07:38.040 END TEST app_cmdline 00:07:38.040 ************************************ 00:07:38.040 19:05:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.299 19:05:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.299 19:05:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.299 19:05:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.299 19:05:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.299 ************************************ 00:07:38.299 START TEST version 00:07:38.299 ************************************ 00:07:38.299 19:05:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.299 * Looking for test storage... 00:07:38.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:38.299 19:05:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.299 19:05:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.299 19:05:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.299 19:05:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.299 19:05:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.299 19:05:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.299 19:05:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.299 19:05:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.299 19:05:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.299 19:05:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.299 19:05:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.299 19:05:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.299 19:05:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.299 19:05:47 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:38.299 19:05:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.299 19:05:47 version -- scripts/common.sh@344 -- # case "$op" in 00:07:38.299 19:05:47 version -- scripts/common.sh@345 -- # : 1 00:07:38.299 19:05:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.299 19:05:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.299 19:05:47 version -- scripts/common.sh@365 -- # decimal 1 00:07:38.299 19:05:47 version -- scripts/common.sh@353 -- # local d=1 00:07:38.299 19:05:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.299 19:05:47 version -- scripts/common.sh@355 -- # echo 1 00:07:38.299 19:05:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.299 19:05:47 version -- scripts/common.sh@366 -- # decimal 2 00:07:38.299 19:05:47 version -- scripts/common.sh@353 -- # local d=2 00:07:38.299 19:05:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.299 19:05:47 version -- scripts/common.sh@355 -- # echo 2 00:07:38.299 19:05:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.299 19:05:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.299 19:05:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.300 19:05:47 version -- scripts/common.sh@368 -- # return 0 00:07:38.300 19:05:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.300 19:05:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.300 --rc genhtml_branch_coverage=1 00:07:38.300 --rc genhtml_function_coverage=1 00:07:38.300 --rc genhtml_legend=1 00:07:38.300 --rc geninfo_all_blocks=1 00:07:38.300 --rc geninfo_unexecuted_blocks=1 00:07:38.300 00:07:38.300 ' 00:07:38.300 19:05:47 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:38.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.300 --rc genhtml_branch_coverage=1 00:07:38.300 --rc genhtml_function_coverage=1 00:07:38.300 --rc genhtml_legend=1 00:07:38.300 --rc geninfo_all_blocks=1 00:07:38.300 --rc geninfo_unexecuted_blocks=1 00:07:38.300 00:07:38.300 ' 00:07:38.300 19:05:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.300 --rc genhtml_branch_coverage=1 00:07:38.300 --rc genhtml_function_coverage=1 00:07:38.300 --rc genhtml_legend=1 00:07:38.300 --rc geninfo_all_blocks=1 00:07:38.300 --rc geninfo_unexecuted_blocks=1 00:07:38.300 00:07:38.300 ' 00:07:38.300 19:05:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.300 --rc genhtml_branch_coverage=1 00:07:38.300 --rc genhtml_function_coverage=1 00:07:38.300 --rc genhtml_legend=1 00:07:38.300 --rc geninfo_all_blocks=1 00:07:38.300 --rc geninfo_unexecuted_blocks=1 00:07:38.300 00:07:38.300 ' 00:07:38.300 19:05:47 version -- app/version.sh@17 -- # get_header_version major 00:07:38.559 19:05:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # cut -f2 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.559 19:05:47 version -- app/version.sh@17 -- # major=25 00:07:38.559 19:05:47 version -- app/version.sh@18 -- # get_header_version minor 00:07:38.559 19:05:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # cut -f2 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.559 19:05:47 version -- app/version.sh@18 -- # minor=1 00:07:38.559 19:05:47 
version -- app/version.sh@19 -- # get_header_version patch 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # cut -f2 00:07:38.559 19:05:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.559 19:05:47 version -- app/version.sh@19 -- # patch=0 00:07:38.559 19:05:47 version -- app/version.sh@20 -- # get_header_version suffix 00:07:38.559 19:05:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # cut -f2 00:07:38.559 19:05:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.559 19:05:47 version -- app/version.sh@20 -- # suffix=-pre 00:07:38.559 19:05:47 version -- app/version.sh@22 -- # version=25.1 00:07:38.559 19:05:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.559 19:05:47 version -- app/version.sh@28 -- # version=25.1rc0 00:07:38.559 19:05:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:38.559 19:05:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.559 19:05:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:38.559 19:05:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:38.559 ************************************ 00:07:38.559 END TEST version 00:07:38.559 ************************************ 00:07:38.559 00:07:38.559 real 0m0.311s 00:07:38.559 user 0m0.181s 00:07:38.559 sys 0m0.183s 00:07:38.559 19:05:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.559 19:05:48 version -- common/autotest_common.sh@10 -- # set +x 00:07:38.559 
19:05:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:38.559 19:05:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:38.559 19:05:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:38.559 19:05:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.559 19:05:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.559 19:05:48 -- common/autotest_common.sh@10 -- # set +x 00:07:38.559 ************************************ 00:07:38.559 START TEST bdev_raid 00:07:38.559 ************************************ 00:07:38.559 19:05:48 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:38.559 * Looking for test storage... 00:07:38.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:38.819 19:05:48 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.819 19:05:48 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.819 19:05:48 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.819 19:05:48 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.820 19:05:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.820 --rc genhtml_branch_coverage=1 00:07:38.820 --rc genhtml_function_coverage=1 00:07:38.820 --rc genhtml_legend=1 00:07:38.820 --rc geninfo_all_blocks=1 00:07:38.820 --rc geninfo_unexecuted_blocks=1 00:07:38.820 00:07:38.820 ' 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.820 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:38.820 --rc genhtml_branch_coverage=1 00:07:38.820 --rc genhtml_function_coverage=1 00:07:38.820 --rc genhtml_legend=1 00:07:38.820 --rc geninfo_all_blocks=1 00:07:38.820 --rc geninfo_unexecuted_blocks=1 00:07:38.820 00:07:38.820 ' 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.820 --rc genhtml_branch_coverage=1 00:07:38.820 --rc genhtml_function_coverage=1 00:07:38.820 --rc genhtml_legend=1 00:07:38.820 --rc geninfo_all_blocks=1 00:07:38.820 --rc geninfo_unexecuted_blocks=1 00:07:38.820 00:07:38.820 ' 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.820 --rc genhtml_branch_coverage=1 00:07:38.820 --rc genhtml_function_coverage=1 00:07:38.820 --rc genhtml_legend=1 00:07:38.820 --rc geninfo_all_blocks=1 00:07:38.820 --rc geninfo_unexecuted_blocks=1 00:07:38.820 00:07:38.820 ' 00:07:38.820 19:05:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:38.820 19:05:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:38.820 19:05:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:38.820 19:05:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:38.820 19:05:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:38.820 19:05:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:38.820 19:05:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.820 19:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.820 ************************************ 
00:07:38.820 START TEST raid1_resize_data_offset_test 00:07:38.820 ************************************ 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60129 00:07:38.820 Process raid pid: 60129 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60129' 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60129 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60129 ']' 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.820 19:05:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.820 [2024-11-27 19:05:48.436888] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:38.820 [2024-11-27 19:05:48.437132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.079 [2024-11-27 19:05:48.613632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.338 [2024-11-27 19:05:48.753177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.597 [2024-11-27 19:05:48.991937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.597 [2024-11-27 19:05:48.992100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.860 malloc0 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.860 malloc1 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.860 19:05:49 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.860 null0 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.860 [2024-11-27 19:05:49.474613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:39.860 [2024-11-27 19:05:49.476747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:39.860 [2024-11-27 19:05:49.476800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:39.860 [2024-11-27 19:05:49.476948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:39.860 [2024-11-27 19:05:49.476963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:39.860 [2024-11-27 19:05:49.477224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:39.860 [2024-11-27 19:05:49.477403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:39.860 [2024-11-27 19:05:49.477416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:39.860 [2024-11-27 19:05:49.477576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:39.860 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.118 [2024-11-27 19:05:49.534573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.118 19:05:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.685 malloc2 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.685 [2024-11-27 19:05:50.160218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:40.685 [2024-11-27 19:05:50.178801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.685 [2024-11-27 19:05:50.180909] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60129 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60129 ']' 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60129 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60129 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60129' 00:07:40.685 killing process with pid 60129 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60129 00:07:40.685 [2024-11-27 19:05:50.273123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.685 19:05:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60129 00:07:40.685 [2024-11-27 19:05:50.273421] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:40.685 [2024-11-27 19:05:50.273542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.685 [2024-11-27 19:05:50.273562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:40.685 [2024-11-27 19:05:50.310152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.685 [2024-11-27 19:05:50.310515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.685 [2024-11-27 19:05:50.310533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:43.214 [2024-11-27 19:05:52.237264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.150 ************************************ 00:07:44.150 END TEST raid1_resize_data_offset_test 00:07:44.150 ************************************ 00:07:44.150 19:05:53 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:44.150 00:07:44.150 real 0m5.097s 00:07:44.150 user 0m4.793s 00:07:44.150 sys 0m0.745s 00:07:44.150 19:05:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.150 19:05:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.150 19:05:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:44.150 19:05:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.150 19:05:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.150 19:05:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.150 ************************************ 00:07:44.150 START TEST raid0_resize_superblock_test 00:07:44.150 ************************************ 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60213 00:07:44.150 Process raid pid: 60213 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60213' 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60213 00:07:44.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60213 ']' 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.150 19:05:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.150 [2024-11-27 19:05:53.612870] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:44.150 [2024-11-27 19:05:53.612990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.409 [2024-11-27 19:05:53.794468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.409 [2024-11-27 19:05:53.932256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.668 [2024-11-27 19:05:54.168676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.668 [2024-11-27 19:05:54.168733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.927 19:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.927 19:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.927 19:05:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:44.927 19:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.927 19:05:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.494 malloc0 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.494 [2024-11-27 19:05:55.054220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:45.494 [2024-11-27 19:05:55.054282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.494 [2024-11-27 19:05:55.054308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:45.494 [2024-11-27 19:05:55.054320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.494 [2024-11-27 19:05:55.056758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.494 [2024-11-27 19:05:55.056796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:45.494 pt0 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.494 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.753 55fd9f04-56f1-4c18-b31e-be9128d1afa7 00:07:45.753 19:05:55 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.753 bc18a71e-ee1c-43b0-aa7c-0120d3ff92b4 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.753 c6906a6e-e638-4977-be76-5f3880d03d60 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.753 [2024-11-27 19:05:55.263112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bc18a71e-ee1c-43b0-aa7c-0120d3ff92b4 is claimed 00:07:45.753 [2024-11-27 19:05:55.263207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c6906a6e-e638-4977-be76-5f3880d03d60 is claimed 00:07:45.753 [2024-11-27 19:05:55.263329] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:45.753 [2024-11-27 19:05:55.263345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:45.753 [2024-11-27 19:05:55.263620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.753 [2024-11-27 19:05:55.263824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:45.753 [2024-11-27 19:05:55.263836] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:45.753 [2024-11-27 19:05:55.264003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.753 19:05:55 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.753 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.754 [2024-11-27 19:05:55.375127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 [2024-11-27 19:05:55.415074] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:46.013 [2024-11-27 19:05:55.415103] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'bc18a71e-ee1c-43b0-aa7c-0120d3ff92b4' was resized: old size 131072, new size 204800 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 [2024-11-27 19:05:55.426962] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:46.013 [2024-11-27 19:05:55.426986] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c6906a6e-e638-4977-be76-5f3880d03d60' was resized: old size 131072, new size 204800 00:07:46.013 [2024-11-27 19:05:55.427013] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:46.013 19:05:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:46.013 [2024-11-27 19:05:55.538869] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 [2024-11-27 19:05:55.586586] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:46.013 [2024-11-27 19:05:55.586660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:46.013 [2024-11-27 19:05:55.586676] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.013 [2024-11-27 19:05:55.586703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:46.013 [2024-11-27 19:05:55.586825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.013 [2024-11-27 19:05:55.586859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.013 [2024-11-27 19:05:55.586872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 [2024-11-27 19:05:55.598488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:46.013 [2024-11-27 19:05:55.598536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.013 [2024-11-27 19:05:55.598558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:46.013 [2024-11-27 19:05:55.598570] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.013 
[2024-11-27 19:05:55.601056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.013 [2024-11-27 19:05:55.601093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:46.013 [2024-11-27 19:05:55.602786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev bc18a71e-ee1c-43b0-aa7c-0120d3ff92b4 00:07:46.013 [2024-11-27 19:05:55.602849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev bc18a71e-ee1c-43b0-aa7c-0120d3ff92b4 is claimed 00:07:46.013 [2024-11-27 19:05:55.602947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c6906a6e-e638-4977-be76-5f3880d03d60 00:07:46.013 [2024-11-27 19:05:55.602965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c6906a6e-e638-4977-be76-5f3880d03d60 is claimed 00:07:46.013 [2024-11-27 19:05:55.603138] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c6906a6e-e638-4977-be76-5f3880d03d60 (2) smaller than existing raid bdev Raid (3) 00:07:46.013 [2024-11-27 19:05:55.603169] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev bc18a71e-ee1c-43b0-aa7c-0120d3ff92b4: File exists 00:07:46.013 [2024-11-27 19:05:55.603204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:46.013 [2024-11-27 19:05:55.603217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:46.013 pt0 00:07:46.013 [2024-11-27 19:05:55.603485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:46.013 [2024-11-27 19:05:55.603645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:46.013 [2024-11-27 19:05:55.603653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:46.013 [2024-11-27 19:05:55.603819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:46.013 [2024-11-27 19:05:55.623023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.013 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60213 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60213 ']' 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60213 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.272 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60213 00:07:46.272 killing process with pid 60213 00:07:46.273 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.273 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.273 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60213' 00:07:46.273 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60213 00:07:46.273 [2024-11-27 19:05:55.713620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.273 [2024-11-27 19:05:55.713684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.273 [2024-11-27 19:05:55.713742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.273 [2024-11-27 19:05:55.713751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:46.273 19:05:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60213 00:07:47.667 [2024-11-27 19:05:57.265007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.057 19:05:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:49.057 00:07:49.057 real 0m4.954s 00:07:49.057 user 0m4.962s 00:07:49.057 sys 0m0.762s 
00:07:49.057 ************************************ 00:07:49.057 END TEST raid0_resize_superblock_test 00:07:49.057 ************************************ 00:07:49.057 19:05:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.057 19:05:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.057 19:05:58 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:49.057 19:05:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.057 19:05:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.057 19:05:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.057 ************************************ 00:07:49.057 START TEST raid1_resize_superblock_test 00:07:49.057 ************************************ 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60317 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60317' 00:07:49.057 Process raid pid: 60317 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60317 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60317 ']' 00:07:49.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.057 19:05:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.057 [2024-11-27 19:05:58.638007] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:49.057 [2024-11-27 19:05:58.638132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.317 [2024-11-27 19:05:58.821450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.576 [2024-11-27 19:05:58.960530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.576 [2024-11-27 19:05:59.202337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.576 [2024-11-27 19:05:59.202387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.834 19:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.834 19:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:49.834 19:05:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:49.834 19:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:49.834 19:05:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.769 malloc0 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.769 [2024-11-27 19:06:00.068901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:50.769 [2024-11-27 19:06:00.069014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.769 [2024-11-27 19:06:00.069057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:50.769 [2024-11-27 19:06:00.069119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.769 [2024-11-27 19:06:00.071639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.769 [2024-11-27 19:06:00.071743] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:50.769 pt0 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.769 8a086c64-1556-402a-9e25-87d08bcadb49 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.769 19:06:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.769 870114ee-3798-4ed3-9d4f-64efc24aa39d 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:50.769 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.770 8068f02e-eacf-40de-9c26-f44116909aa5 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.770 [2024-11-27 19:06:00.274852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 870114ee-3798-4ed3-9d4f-64efc24aa39d is claimed 00:07:50.770 [2024-11-27 19:06:00.274988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8068f02e-eacf-40de-9c26-f44116909aa5 is claimed 00:07:50.770 [2024-11-27 19:06:00.275144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:50.770 [2024-11-27 19:06:00.275163] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:50.770 [2024-11-27 19:06:00.275437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:50.770 [2024-11-27 19:06:00.275630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:50.770 [2024-11-27 19:06:00.275640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:50.770 [2024-11-27 19:06:00.275814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.770 19:06:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.770 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.770 [2024-11-27 19:06:00.391046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 [2024-11-27 19:06:00.419031] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:51.029 [2024-11-27 19:06:00.419057] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '870114ee-3798-4ed3-9d4f-64efc24aa39d' was resized: old size 131072, new size 204800 
00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 [2024-11-27 19:06:00.430957] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:51.029 [2024-11-27 19:06:00.430979] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8068f02e-eacf-40de-9c26-f44116909aa5' was resized: old size 131072, new size 204800 00:07:51.029 [2024-11-27 19:06:00.431005] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:51.029 19:06:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.029 [2024-11-27 19:06:00.543035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.029 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:51.030 [2024-11-27 19:06:00.586850] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:51.030 [2024-11-27 19:06:00.586917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:51.030 [2024-11-27 19:06:00.586945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:51.030 [2024-11-27 19:06:00.587088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.030 [2024-11-27 19:06:00.587264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.030 [2024-11-27 19:06:00.587325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.030 [2024-11-27 19:06:00.587338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.030 [2024-11-27 19:06:00.598799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:51.030 [2024-11-27 19:06:00.598846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.030 [2024-11-27 19:06:00.598866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:51.030 [2024-11-27 19:06:00.598880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.030 [2024-11-27 19:06:00.601305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.030 
[2024-11-27 19:06:00.601342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:51.030 [2024-11-27 19:06:00.602966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 870114ee-3798-4ed3-9d4f-64efc24aa39d 00:07:51.030 [2024-11-27 19:06:00.603049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 870114ee-3798-4ed3-9d4f-64efc24aa39d is claimed 00:07:51.030 [2024-11-27 19:06:00.603170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8068f02e-eacf-40de-9c26-f44116909aa5 00:07:51.030 [2024-11-27 19:06:00.603188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8068f02e-eacf-40de-9c26-f44116909aa5 is claimed 00:07:51.030 [2024-11-27 19:06:00.603336] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8068f02e-eacf-40de-9c26-f44116909aa5 (2) smaller than existing raid bdev Raid (3) 00:07:51.030 [2024-11-27 19:06:00.603361] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 870114ee-3798-4ed3-9d4f-64efc24aa39d: File exists 00:07:51.030 [2024-11-27 19:06:00.603395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:51.030 [2024-11-27 19:06:00.603408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:51.030 [2024-11-27 19:06:00.603663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:51.030 pt0 00:07:51.030 [2024-11-27 19:06:00.603844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:51.030 [2024-11-27 19:06:00.603906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:51.030 [2024-11-27 19:06:00.604057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.030 [2024-11-27 19:06:00.627200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.030 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60317 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60317 ']' 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 60317 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60317 00:07:51.290 killing process with pid 60317 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60317' 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60317 00:07:51.290 [2024-11-27 19:06:00.709662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.290 [2024-11-27 19:06:00.709743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.290 [2024-11-27 19:06:00.709792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.290 [2024-11-27 19:06:00.709801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:51.290 19:06:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60317 00:07:52.667 [2024-11-27 19:06:02.252325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.041 ************************************ 00:07:54.041 END TEST raid1_resize_superblock_test 00:07:54.041 ************************************ 00:07:54.041 19:06:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:54.041 00:07:54.041 real 0m4.921s 00:07:54.041 user 0m4.931s 
00:07:54.041 sys 0m0.739s 00:07:54.041 19:06:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.041 19:06:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.041 19:06:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:54.041 19:06:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:54.041 19:06:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:54.041 19:06:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:54.041 19:06:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:54.041 19:06:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:54.041 19:06:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.041 19:06:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.041 19:06:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.041 ************************************ 00:07:54.041 START TEST raid_function_test_raid0 00:07:54.041 ************************************ 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60425 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60425' 00:07:54.041 Process raid pid: 
60425 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60425 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60425 ']' 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.041 19:06:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:54.041 [2024-11-27 19:06:03.650366] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:07:54.041 [2024-11-27 19:06:03.650538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.300 [2024-11-27 19:06:03.818565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.557 [2024-11-27 19:06:03.958962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.816 [2024-11-27 19:06:04.200382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.816 [2024-11-27 19:06:04.200541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.075 Base_1 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.075 Base_2 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.075 [2024-11-27 19:06:04.563080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:55.075 [2024-11-27 19:06:04.565237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:55.075 [2024-11-27 19:06:04.565308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.075 [2024-11-27 19:06:04.565320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:55.075 [2024-11-27 19:06:04.565597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.075 [2024-11-27 19:06:04.565774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.075 [2024-11-27 19:06:04.565784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:55.075 [2024-11-27 19:06:04.565948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.075 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:55.076 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:55.334 [2024-11-27 19:06:04.794971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:55.334 /dev/nbd0 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:55.334 
19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.334 1+0 records in 00:07:55.334 1+0 records out 00:07:55.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597918 s, 6.9 MB/s 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:55.334 19:06:04 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:55.592 { 00:07:55.592 "nbd_device": "/dev/nbd0", 00:07:55.592 "bdev_name": "raid" 00:07:55.592 } 00:07:55.592 ]' 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:55.592 { 00:07:55.592 "nbd_device": "/dev/nbd0", 00:07:55.592 "bdev_name": "raid" 00:07:55.592 } 00:07:55.592 ]' 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:55.592 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:55.593 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:55.593 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:55.593 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:55.593 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:55.593 4096+0 records in 00:07:55.593 4096+0 records out 00:07:55.593 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341357 s, 61.4 MB/s 00:07:55.593 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:55.851 4096+0 records in 00:07:55.851 4096+0 records out 00:07:55.851 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.229029 s, 9.2 MB/s 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:55.851 128+0 records in 00:07:55.851 128+0 records out 00:07:55.851 65536 bytes (66 kB, 64 KiB) copied, 0.00140055 s, 46.8 MB/s 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:55.851 2035+0 records in 00:07:55.851 2035+0 records out 00:07:55.851 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0150585 s, 69.2 MB/s 00:07:55.851 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:56.109 19:06:05 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:56.109 456+0 records in 00:07:56.109 456+0 records out 00:07:56.109 233472 bytes (233 kB, 228 KiB) copied, 0.00403996 s, 57.8 MB/s 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:56.109 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.110 19:06:05 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:56.110 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.110 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:56.110 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.368 [2024-11-27 19:06:05.746235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.368 19:06:05 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60425 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60425 ']' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60425 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60425 00:07:56.627 killing process with pid 60425 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60425' 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 
60425 00:07:56.627 [2024-11-27 19:06:06.075729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.627 [2024-11-27 19:06:06.075862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.627 19:06:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60425 00:07:56.627 [2024-11-27 19:06:06.075922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.627 [2024-11-27 19:06:06.075940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:56.885 [2024-11-27 19:06:06.294868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.312 19:06:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:58.312 00:07:58.312 real 0m3.949s 00:07:58.312 user 0m4.375s 00:07:58.312 sys 0m1.100s 00:07:58.312 19:06:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.312 ************************************ 00:07:58.312 END TEST raid_function_test_raid0 00:07:58.312 ************************************ 00:07:58.312 19:06:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:58.312 19:06:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:58.312 19:06:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.312 19:06:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.312 19:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.312 ************************************ 00:07:58.312 START TEST raid_function_test_concat 00:07:58.312 ************************************ 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:58.312 Process raid pid: 60554 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60554 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60554' 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60554 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60554 ']' 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.312 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.313 19:06:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:58.313 [2024-11-27 19:06:07.661221] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
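The raid0 run above (and the concat run that starts here) drives the same `raid_unmap_data_verify` pattern visible in the trace: fill a reference file with random data, copy it to the nbd device, then for each (offset, count) pair zero the reference region with `dd`, `blkdiscard` the matching device region, flush, and `cmp` the full range again. A condensed, self-contained sketch of that loop follows; the two temp files standing in for `/raidtest/raidrandtest` and `/dev/nbd0` are an assumption for illustration only, since the real script discards on the nbd device rather than zeroing a file:

```shell
#!/usr/bin/env bash
# Sketch of the write/discard/compare loop traced above
# (bdev_raid.sh raid_unmap_data_verify). Two temp files stand in for
# the reference file and the raid-backed /dev/nbd0 -- an assumption
# for illustration; the real test zeroes the device region with
# blkdiscard, not dd.
set -e
blksize=512
rw_blk_num=4096
ref=$(mktemp)   # reference data (role of /raidtest/raidrandtest)
dev=$(mktemp)   # stand-in for /dev/nbd0

# Write a random pattern, mirror it to the "device", then verify.
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num status=none
dd if="$ref" of="$dev" bs=$blksize count=$rw_blk_num status=none
cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"

# Same (offset, count) pairs the trace shows at bdev_raid.sh@23/@24.
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
for i in 0 1 2; do
  off=${unmap_blk_offs[i]}
  num=${unmap_blk_nums[i]}
  # Zero the region in both copies; discarded blocks on the raid bdev
  # read back as zeroes, so the full range must still compare equal.
  dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num conv=notrunc status=none
  dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num conv=notrunc status=none
  cmp -b -n $((blksize * rw_blk_num)) "$ref" "$dev"
done
echo PASS
```

The `conv=notrunc` flag matters here: without it, `dd` would truncate the file at the end of the zeroed region instead of punching zeroes into the middle of the existing data.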
00:07:58.313 [2024-11-27 19:06:07.661415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.313 [2024-11-27 19:06:07.836940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.571 [2024-11-27 19:06:07.975721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.830 [2024-11-27 19:06:08.209144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.830 [2024-11-27 19:06:08.209311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 Base_1 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 Base_2 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 [2024-11-27 19:06:08.589025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:59.089 [2024-11-27 19:06:08.591256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:59.089 [2024-11-27 19:06:08.591394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:59.089 [2024-11-27 19:06:08.591436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:59.089 [2024-11-27 19:06:08.591775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.089 [2024-11-27 19:06:08.591996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:59.089 [2024-11-27 19:06:08.592036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:59.089 [2024-11-27 19:06:08.592253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.089 19:06:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:59.089 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:59.348 [2024-11-27 19:06:08.836657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:59.348 /dev/nbd0 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.348 1+0 records in 00:07:59.348 1+0 records out 00:07:59.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609671 s, 6.7 MB/s 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
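The `nbd_get_count` helper whose locals appear here (nbd_common.sh@61-66) turns the `nbd_get_disks` RPC reply into a device count with `jq` and `grep -c`. A minimal sketch of just that parsing step is below; the JSON literal is copied from the reply captured earlier in this trace, and the `rpc.py -s /var/tmp/spdk.sock nbd_get_disks` call that produces it in the real script is omitted so the snippet runs standalone:

```shell
# Sketch of the JSON-to-count step in nbd_get_count. The JSON literal
# is the nbd_get_disks reply seen in this trace; the rpc.py call that
# produces it is omitted so the snippet is self-contained.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero when nothing matches (empty disk list), which
# is why the trace shows a bare "true" branch for the count=0 case.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # 1 for the one-disk reply above, 0 after nbd_stop_disk
```

The later `nbd_get_count` invocations in this trace, made after `nbd_stop_disk`, walk the same pipeline over an empty `[]` reply and land on `count=0`, which is what the `'[' 0 -ne 0 ']'` checks verify.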
00:07:59.348 19:06:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:59.608 { 00:07:59.608 "nbd_device": "/dev/nbd0", 00:07:59.608 "bdev_name": "raid" 00:07:59.608 } 00:07:59.608 ]' 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:59.608 { 00:07:59.608 "nbd_device": "/dev/nbd0", 00:07:59.608 "bdev_name": "raid" 00:07:59.608 } 00:07:59.608 ]' 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:59.608 19:06:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:59.608 4096+0 records in 00:07:59.608 4096+0 records out 00:07:59.608 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326803 s, 64.2 MB/s 00:07:59.608 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:59.866 4096+0 records in 00:07:59.866 4096+0 records out 00:07:59.866 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.235511 s, 8.9 MB/s 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:59.866 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:59.866 128+0 records in 00:07:59.866 128+0 records out 00:07:59.867 65536 bytes (66 kB, 64 KiB) copied, 0.00119061 s, 55.0 MB/s 00:07:59.867 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:00.126 2035+0 records in 00:08:00.126 2035+0 records out 00:08:00.126 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0134729 s, 77.3 MB/s 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:00.126 19:06:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:00.126 456+0 records in 00:08:00.126 456+0 records out 00:08:00.126 233472 bytes (233 kB, 228 KiB) copied, 0.00358241 s, 65.2 MB/s 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:00.126 
19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.126 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:00.384 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.384 [2024-11-27 19:06:09.802042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:00.385 19:06:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:00.385 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.385 19:06:10 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.385 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60554 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60554 ']' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60554 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60554 00:08:00.643 killing process with pid 60554 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60554' 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60554 00:08:00.643 [2024-11-27 19:06:10.111844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.643 [2024-11-27 19:06:10.111958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.643 19:06:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60554 00:08:00.643 [2024-11-27 19:06:10.112019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.643 [2024-11-27 19:06:10.112033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:00.902 [2024-11-27 19:06:10.329455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.278 ************************************ 00:08:02.278 END TEST raid_function_test_concat 00:08:02.278 ************************************ 00:08:02.278 19:06:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:02.278 00:08:02.278 real 0m3.962s 00:08:02.278 user 0m4.448s 00:08:02.278 sys 0m1.059s 00:08:02.278 19:06:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.278 19:06:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:02.278 19:06:11 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:02.278 19:06:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.278 19:06:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.278 19:06:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.278 ************************************ 00:08:02.278 START TEST raid0_resize_test 00:08:02.278 ************************************ 00:08:02.278 19:06:11 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:02.278 Process raid pid: 60677 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60677 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60677' 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60677 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60677 ']' 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:02.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.278 19:06:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.279 [2024-11-27 19:06:11.697854] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:02.279 [2024-11-27 19:06:11.698097] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.279 [2024-11-27 19:06:11.868558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.538 [2024-11-27 19:06:12.004973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.797 [2024-11-27 19:06:12.243875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.797 [2024-11-27 19:06:12.244061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.056 Base_1 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.056 Base_2 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.056 [2024-11-27 19:06:12.548392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:03.056 [2024-11-27 19:06:12.550559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:03.056 [2024-11-27 19:06:12.550668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:03.056 [2024-11-27 19:06:12.550721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:03.056 [2024-11-27 19:06:12.551019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:03.056 [2024-11-27 19:06:12.551182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:03.056 [2024-11-27 19:06:12.551220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:03.056 [2024-11-27 19:06:12.551440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:03.056 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.056 [2024-11-27 19:06:12.556343] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:03.057 [2024-11-27 19:06:12.556408] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:03.057 true 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 [2024-11-27 19:06:12.572491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 [2024-11-27 19:06:12.616222] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:03.057 [2024-11-27 19:06:12.616281] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:03.057 [2024-11-27 19:06:12.616336] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:03.057 true 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:03.057 [2024-11-27 19:06:12.628366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60677 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60677 ']' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60677 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@959 -- # uname 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.057 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60677 00:08:03.315 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.315 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.315 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60677' 00:08:03.315 killing process with pid 60677 00:08:03.315 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60677 00:08:03.315 [2024-11-27 19:06:12.715764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.315 [2024-11-27 19:06:12.715888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.315 [2024-11-27 19:06:12.715967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.315 [2024-11-27 19:06:12.716015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:03.315 19:06:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60677 00:08:03.315 [2024-11-27 19:06:12.733958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.692 19:06:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:04.692 00:08:04.692 real 0m2.328s 00:08:04.692 user 0m2.360s 00:08:04.692 sys 0m0.442s 00:08:04.692 19:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.692 19:06:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.692 ************************************ 00:08:04.692 END TEST raid0_resize_test 00:08:04.692 
************************************ 00:08:04.692 19:06:13 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:04.692 19:06:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.692 19:06:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.692 19:06:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.692 ************************************ 00:08:04.692 START TEST raid1_resize_test 00:08:04.692 ************************************ 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:04.692 19:06:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60738 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60738' 00:08:04.693 Process raid pid: 60738 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60738 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@835 -- # '[' -z 60738 ']' 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.693 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.693 [2024-11-27 19:06:14.093215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:04.693 [2024-11-27 19:06:14.093439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.693 [2024-11-27 19:06:14.262997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.952 [2024-11-27 19:06:14.406458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.211 [2024-11-27 19:06:14.642587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.212 [2024-11-27 19:06:14.642777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.471 Base_1 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.471 Base_2 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.471 [2024-11-27 19:06:14.956185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:05.471 [2024-11-27 19:06:14.958178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:05.471 [2024-11-27 19:06:14.958240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:05.471 [2024-11-27 19:06:14.958252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:05.471 [2024-11-27 19:06:14.958506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:05.471 [2024-11-27 19:06:14.958643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:05.471 [2024-11-27 19:06:14.958669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007780 00:08:05.471 [2024-11-27 19:06:14.958834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.471 [2024-11-27 19:06:14.968140] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:05.471 [2024-11-27 19:06:14.968171] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:05.471 true 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.471 19:06:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.471 [2024-11-27 19:06:14.984285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.471 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.472 [2024-11-27 19:06:15.032019] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:05.472 [2024-11-27 19:06:15.032083] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:05.472 [2024-11-27 19:06:15.032139] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:05.472 true 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.472 [2024-11-27 19:06:15.044159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:05.472 
19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60738 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60738 ']' 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60738 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.472 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60738 00:08:05.731 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.731 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.731 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60738' 00:08:05.731 killing process with pid 60738 00:08:05.731 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60738 00:08:05.731 [2024-11-27 19:06:15.128883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.731 [2024-11-27 19:06:15.129010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.731 19:06:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60738 00:08:05.731 [2024-11-27 19:06:15.129525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.731 [2024-11-27 19:06:15.129593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:05.731 [2024-11-27 19:06:15.147439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.110 19:06:16 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:08:07.110 00:08:07.110 real 0m2.353s 00:08:07.110 user 0m2.402s 00:08:07.110 sys 0m0.436s 00:08:07.110 19:06:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.110 19:06:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 ************************************ 00:08:07.110 END TEST raid1_resize_test 00:08:07.110 ************************************ 00:08:07.110 19:06:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:07.110 19:06:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:07.110 19:06:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:07.110 19:06:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:07.110 19:06:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.110 19:06:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 ************************************ 00:08:07.110 START TEST raid_state_function_test 00:08:07.110 ************************************ 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.110 19:06:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60795 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.110 Process raid pid: 60795 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60795' 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60795 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60795 ']' 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.110 19:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.110 [2024-11-27 19:06:16.518370] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:07.110 [2024-11-27 19:06:16.518567] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.110 [2024-11-27 19:06:16.690804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.369 [2024-11-27 19:06:16.833202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.639 [2024-11-27 19:06:17.068176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.639 [2024-11-27 19:06:17.068324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 [2024-11-27 19:06:17.351396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.918 [2024-11-27 19:06:17.351468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.918 [2024-11-27 19:06:17.351479] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.918 [2024-11-27 19:06:17.351489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.918 19:06:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.918 "name": "Existed_Raid", 00:08:07.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.918 "strip_size_kb": 64, 00:08:07.918 "state": "configuring", 00:08:07.918 
"raid_level": "raid0", 00:08:07.918 "superblock": false, 00:08:07.918 "num_base_bdevs": 2, 00:08:07.918 "num_base_bdevs_discovered": 0, 00:08:07.918 "num_base_bdevs_operational": 2, 00:08:07.918 "base_bdevs_list": [ 00:08:07.918 { 00:08:07.918 "name": "BaseBdev1", 00:08:07.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.918 "is_configured": false, 00:08:07.918 "data_offset": 0, 00:08:07.918 "data_size": 0 00:08:07.918 }, 00:08:07.918 { 00:08:07.918 "name": "BaseBdev2", 00:08:07.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.918 "is_configured": false, 00:08:07.918 "data_offset": 0, 00:08:07.918 "data_size": 0 00:08:07.918 } 00:08:07.918 ] 00:08:07.918 }' 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.918 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.177 [2024-11-27 19:06:17.754630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.177 [2024-11-27 19:06:17.754725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:08.177 [2024-11-27 19:06:17.762595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.177 [2024-11-27 19:06:17.762687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.177 [2024-11-27 19:06:17.762726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.177 [2024-11-27 19:06:17.762753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.177 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.436 [2024-11-27 19:06:17.812142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.436 BaseBdev1 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.436 [ 00:08:08.436 { 00:08:08.436 "name": "BaseBdev1", 00:08:08.436 "aliases": [ 00:08:08.436 "6a3f9146-1e2d-45f0-8374-354e4d36f12d" 00:08:08.436 ], 00:08:08.436 "product_name": "Malloc disk", 00:08:08.436 "block_size": 512, 00:08:08.436 "num_blocks": 65536, 00:08:08.436 "uuid": "6a3f9146-1e2d-45f0-8374-354e4d36f12d", 00:08:08.436 "assigned_rate_limits": { 00:08:08.436 "rw_ios_per_sec": 0, 00:08:08.436 "rw_mbytes_per_sec": 0, 00:08:08.436 "r_mbytes_per_sec": 0, 00:08:08.436 "w_mbytes_per_sec": 0 00:08:08.436 }, 00:08:08.436 "claimed": true, 00:08:08.436 "claim_type": "exclusive_write", 00:08:08.436 "zoned": false, 00:08:08.436 "supported_io_types": { 00:08:08.436 "read": true, 00:08:08.436 "write": true, 00:08:08.436 "unmap": true, 00:08:08.436 "flush": true, 00:08:08.436 "reset": true, 00:08:08.436 "nvme_admin": false, 00:08:08.436 "nvme_io": false, 00:08:08.436 "nvme_io_md": false, 00:08:08.436 "write_zeroes": true, 00:08:08.436 "zcopy": true, 00:08:08.436 "get_zone_info": false, 00:08:08.436 "zone_management": false, 00:08:08.436 "zone_append": false, 00:08:08.436 "compare": false, 00:08:08.436 "compare_and_write": false, 00:08:08.436 "abort": true, 00:08:08.436 "seek_hole": false, 00:08:08.436 "seek_data": false, 00:08:08.436 "copy": true, 00:08:08.436 "nvme_iov_md": 
false 00:08:08.436 }, 00:08:08.436 "memory_domains": [ 00:08:08.436 { 00:08:08.436 "dma_device_id": "system", 00:08:08.436 "dma_device_type": 1 00:08:08.436 }, 00:08:08.436 { 00:08:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.436 "dma_device_type": 2 00:08:08.436 } 00:08:08.436 ], 00:08:08.436 "driver_specific": {} 00:08:08.436 } 00:08:08.436 ] 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.436 
19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.436 "name": "Existed_Raid", 00:08:08.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.436 "strip_size_kb": 64, 00:08:08.436 "state": "configuring", 00:08:08.436 "raid_level": "raid0", 00:08:08.436 "superblock": false, 00:08:08.436 "num_base_bdevs": 2, 00:08:08.436 "num_base_bdevs_discovered": 1, 00:08:08.436 "num_base_bdevs_operational": 2, 00:08:08.436 "base_bdevs_list": [ 00:08:08.436 { 00:08:08.436 "name": "BaseBdev1", 00:08:08.436 "uuid": "6a3f9146-1e2d-45f0-8374-354e4d36f12d", 00:08:08.436 "is_configured": true, 00:08:08.436 "data_offset": 0, 00:08:08.436 "data_size": 65536 00:08:08.436 }, 00:08:08.436 { 00:08:08.436 "name": "BaseBdev2", 00:08:08.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.436 "is_configured": false, 00:08:08.436 "data_offset": 0, 00:08:08.436 "data_size": 0 00:08:08.436 } 00:08:08.436 ] 00:08:08.436 }' 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.436 19:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.695 [2024-11-27 19:06:18.299351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.695 [2024-11-27 19:06:18.299408] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.695 [2024-11-27 19:06:18.311357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.695 [2024-11-27 19:06:18.313405] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.695 [2024-11-27 19:06:18.313452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.695 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.954 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.954 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.954 "name": "Existed_Raid", 00:08:08.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.954 "strip_size_kb": 64, 00:08:08.954 "state": "configuring", 00:08:08.954 "raid_level": "raid0", 00:08:08.954 "superblock": false, 00:08:08.954 "num_base_bdevs": 2, 00:08:08.954 "num_base_bdevs_discovered": 1, 00:08:08.954 "num_base_bdevs_operational": 2, 00:08:08.954 "base_bdevs_list": [ 00:08:08.954 { 00:08:08.954 "name": "BaseBdev1", 00:08:08.954 "uuid": "6a3f9146-1e2d-45f0-8374-354e4d36f12d", 00:08:08.954 "is_configured": true, 00:08:08.954 "data_offset": 0, 00:08:08.954 "data_size": 65536 00:08:08.954 }, 00:08:08.954 { 00:08:08.954 "name": "BaseBdev2", 00:08:08.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.954 "is_configured": false, 00:08:08.954 "data_offset": 0, 00:08:08.954 "data_size": 0 00:08:08.954 } 00:08:08.954 
] 00:08:08.954 }' 00:08:08.954 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.954 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.213 [2024-11-27 19:06:18.788867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.213 [2024-11-27 19:06:18.789006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.213 [2024-11-27 19:06:18.789032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:09.213 [2024-11-27 19:06:18.789366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.213 [2024-11-27 19:06:18.789606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.213 [2024-11-27 19:06:18.789652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:09.213 [2024-11-27 19:06:18.789980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.213 BaseBdev2 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.213 19:06:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.213 [ 00:08:09.213 { 00:08:09.213 "name": "BaseBdev2", 00:08:09.213 "aliases": [ 00:08:09.213 "e50fcca7-2052-4b33-8a7e-b2482af5d34b" 00:08:09.213 ], 00:08:09.213 "product_name": "Malloc disk", 00:08:09.213 "block_size": 512, 00:08:09.213 "num_blocks": 65536, 00:08:09.213 "uuid": "e50fcca7-2052-4b33-8a7e-b2482af5d34b", 00:08:09.213 "assigned_rate_limits": { 00:08:09.213 "rw_ios_per_sec": 0, 00:08:09.213 "rw_mbytes_per_sec": 0, 00:08:09.213 "r_mbytes_per_sec": 0, 00:08:09.213 "w_mbytes_per_sec": 0 00:08:09.213 }, 00:08:09.213 "claimed": true, 00:08:09.213 "claim_type": "exclusive_write", 00:08:09.213 "zoned": false, 00:08:09.213 "supported_io_types": { 00:08:09.213 "read": true, 00:08:09.213 "write": true, 00:08:09.213 "unmap": true, 00:08:09.213 "flush": true, 00:08:09.213 "reset": true, 00:08:09.213 "nvme_admin": false, 00:08:09.213 "nvme_io": false, 00:08:09.213 "nvme_io_md": 
false, 00:08:09.213 "write_zeroes": true, 00:08:09.213 "zcopy": true, 00:08:09.213 "get_zone_info": false, 00:08:09.213 "zone_management": false, 00:08:09.213 "zone_append": false, 00:08:09.213 "compare": false, 00:08:09.213 "compare_and_write": false, 00:08:09.213 "abort": true, 00:08:09.213 "seek_hole": false, 00:08:09.213 "seek_data": false, 00:08:09.213 "copy": true, 00:08:09.213 "nvme_iov_md": false 00:08:09.213 }, 00:08:09.213 "memory_domains": [ 00:08:09.213 { 00:08:09.213 "dma_device_id": "system", 00:08:09.213 "dma_device_type": 1 00:08:09.213 }, 00:08:09.213 { 00:08:09.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.213 "dma_device_type": 2 00:08:09.213 } 00:08:09.213 ], 00:08:09.213 "driver_specific": {} 00:08:09.213 } 00:08:09.213 ] 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.213 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.472 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.472 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.472 "name": "Existed_Raid", 00:08:09.472 "uuid": "cb360024-a66b-4f88-88d2-1718c09a8e65", 00:08:09.472 "strip_size_kb": 64, 00:08:09.472 "state": "online", 00:08:09.472 "raid_level": "raid0", 00:08:09.472 "superblock": false, 00:08:09.472 "num_base_bdevs": 2, 00:08:09.472 "num_base_bdevs_discovered": 2, 00:08:09.472 "num_base_bdevs_operational": 2, 00:08:09.472 "base_bdevs_list": [ 00:08:09.472 { 00:08:09.472 "name": "BaseBdev1", 00:08:09.472 "uuid": "6a3f9146-1e2d-45f0-8374-354e4d36f12d", 00:08:09.472 "is_configured": true, 00:08:09.472 "data_offset": 0, 00:08:09.472 "data_size": 65536 00:08:09.472 }, 00:08:09.472 { 00:08:09.472 "name": "BaseBdev2", 00:08:09.472 "uuid": "e50fcca7-2052-4b33-8a7e-b2482af5d34b", 00:08:09.472 "is_configured": true, 00:08:09.472 "data_offset": 0, 00:08:09.473 "data_size": 65536 00:08:09.473 } 00:08:09.473 ] 00:08:09.473 }' 00:08:09.473 19:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:09.473 19:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.733 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.734 [2024-11-27 19:06:19.272386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.734 "name": "Existed_Raid", 00:08:09.734 "aliases": [ 00:08:09.734 "cb360024-a66b-4f88-88d2-1718c09a8e65" 00:08:09.734 ], 00:08:09.734 "product_name": "Raid Volume", 00:08:09.734 "block_size": 512, 00:08:09.734 "num_blocks": 131072, 00:08:09.734 "uuid": "cb360024-a66b-4f88-88d2-1718c09a8e65", 00:08:09.734 "assigned_rate_limits": { 00:08:09.734 "rw_ios_per_sec": 0, 00:08:09.734 "rw_mbytes_per_sec": 0, 00:08:09.734 "r_mbytes_per_sec": 
0, 00:08:09.734 "w_mbytes_per_sec": 0 00:08:09.734 }, 00:08:09.734 "claimed": false, 00:08:09.734 "zoned": false, 00:08:09.734 "supported_io_types": { 00:08:09.734 "read": true, 00:08:09.734 "write": true, 00:08:09.734 "unmap": true, 00:08:09.734 "flush": true, 00:08:09.734 "reset": true, 00:08:09.734 "nvme_admin": false, 00:08:09.734 "nvme_io": false, 00:08:09.734 "nvme_io_md": false, 00:08:09.734 "write_zeroes": true, 00:08:09.734 "zcopy": false, 00:08:09.734 "get_zone_info": false, 00:08:09.734 "zone_management": false, 00:08:09.734 "zone_append": false, 00:08:09.734 "compare": false, 00:08:09.734 "compare_and_write": false, 00:08:09.734 "abort": false, 00:08:09.734 "seek_hole": false, 00:08:09.734 "seek_data": false, 00:08:09.734 "copy": false, 00:08:09.734 "nvme_iov_md": false 00:08:09.734 }, 00:08:09.734 "memory_domains": [ 00:08:09.734 { 00:08:09.734 "dma_device_id": "system", 00:08:09.734 "dma_device_type": 1 00:08:09.734 }, 00:08:09.734 { 00:08:09.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.734 "dma_device_type": 2 00:08:09.734 }, 00:08:09.734 { 00:08:09.734 "dma_device_id": "system", 00:08:09.734 "dma_device_type": 1 00:08:09.734 }, 00:08:09.734 { 00:08:09.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.734 "dma_device_type": 2 00:08:09.734 } 00:08:09.734 ], 00:08:09.734 "driver_specific": { 00:08:09.734 "raid": { 00:08:09.734 "uuid": "cb360024-a66b-4f88-88d2-1718c09a8e65", 00:08:09.734 "strip_size_kb": 64, 00:08:09.734 "state": "online", 00:08:09.734 "raid_level": "raid0", 00:08:09.734 "superblock": false, 00:08:09.734 "num_base_bdevs": 2, 00:08:09.734 "num_base_bdevs_discovered": 2, 00:08:09.734 "num_base_bdevs_operational": 2, 00:08:09.734 "base_bdevs_list": [ 00:08:09.734 { 00:08:09.734 "name": "BaseBdev1", 00:08:09.734 "uuid": "6a3f9146-1e2d-45f0-8374-354e4d36f12d", 00:08:09.734 "is_configured": true, 00:08:09.734 "data_offset": 0, 00:08:09.734 "data_size": 65536 00:08:09.734 }, 00:08:09.734 { 00:08:09.734 "name": "BaseBdev2", 
00:08:09.734 "uuid": "e50fcca7-2052-4b33-8a7e-b2482af5d34b", 00:08:09.734 "is_configured": true, 00:08:09.734 "data_offset": 0, 00:08:09.734 "data_size": 65536 00:08:09.734 } 00:08:09.734 ] 00:08:09.734 } 00:08:09.734 } 00:08:09.734 }' 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.734 BaseBdev2' 00:08:09.734 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.993 [2024-11-27 19:06:19.487794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:09.993 [2024-11-27 19:06:19.487837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:09.993 [2024-11-27 19:06:19.487895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.993 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.251 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:10.251 "name": "Existed_Raid",
00:08:10.251 "uuid": "cb360024-a66b-4f88-88d2-1718c09a8e65",
00:08:10.251 "strip_size_kb": 64,
00:08:10.251 "state": "offline",
00:08:10.251 "raid_level": "raid0",
00:08:10.251 "superblock": false,
00:08:10.251 "num_base_bdevs": 2,
00:08:10.251 "num_base_bdevs_discovered": 1,
00:08:10.251 "num_base_bdevs_operational": 1,
00:08:10.251 "base_bdevs_list": [
00:08:10.251 {
00:08:10.251 "name": null,
00:08:10.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.251 "is_configured": false,
00:08:10.251 "data_offset": 0,
00:08:10.251 "data_size": 65536
00:08:10.251 },
00:08:10.251 {
00:08:10.251 "name": "BaseBdev2",
00:08:10.252 "uuid": "e50fcca7-2052-4b33-8a7e-b2482af5d34b",
00:08:10.252 "is_configured": true,
00:08:10.252 "data_offset": 0,
00:08:10.252 "data_size": 65536
00:08:10.252 }
00:08:10.252 ]
00:08:10.252 }'
00:08:10.252 19:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:10.252 19:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.510 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.510 [2024-11-27 19:06:20.103289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:10.510 [2024-11-27 19:06:20.103356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:10.769 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.769 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:10.769 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:10.769 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.769 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.769 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60795
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60795 ']'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60795
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60795
00:08:10.770 killing process with pid 60795
19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60795'
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60795
00:08:10.770 [2024-11-27 19:06:20.299578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:10.770 19:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60795
00:08:10.770 [2024-11-27 19:06:20.317717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:12.144
00:08:12.144 real 0m5.102s
00:08:12.144 user 0m7.187s
00:08:12.144 sys 0m0.923s
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:12.144 ************************************
00:08:12.144 END TEST raid_state_function_test
00:08:12.144 ************************************
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.144 19:06:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:08:12.144 19:06:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:12.144 19:06:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:12.144 19:06:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:12.144 ************************************
00:08:12.144 START TEST raid_state_function_test_sb
************************************
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61048
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61048'
00:08:12.144 Process raid pid: 61048
19:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61048
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61048 ']'
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:12.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:12.144 19:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:12.144 [2024-11-27 19:06:21.695670] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:08:12.144 [2024-11-27 19:06:21.695810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:12.402 [2024-11-27 19:06:21.859281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.402 [2024-11-27 19:06:21.999887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.661 [2024-11-27 19:06:22.241569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-27 19:06:22.241620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:12.919 [2024-11-27 19:06:22.529876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:12.919 [2024-11-27 19:06:22.529938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:12.919 [2024-11-27 19:06:22.529951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:12.919 [2024-11-27 19:06:22.529961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:12.919 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.177 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.178 "name": "Existed_Raid",
00:08:13.178 "uuid": "8862f0f8-19ab-4a44-87c9-7a76e0d98d3a",
00:08:13.178 "strip_size_kb": 64,
00:08:13.178 "state": "configuring",
00:08:13.178 "raid_level": "raid0",
00:08:13.178 "superblock": true,
00:08:13.178 "num_base_bdevs": 2,
00:08:13.178 "num_base_bdevs_discovered": 0,
00:08:13.178 "num_base_bdevs_operational": 2,
00:08:13.178 "base_bdevs_list": [
00:08:13.178 {
00:08:13.178 "name": "BaseBdev1",
00:08:13.178 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.178 "is_configured": false,
00:08:13.178 "data_offset": 0,
00:08:13.178 "data_size": 0
00:08:13.178 },
00:08:13.178 {
00:08:13.178 "name": "BaseBdev2",
00:08:13.178 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.178 "is_configured": false,
00:08:13.178 "data_offset": 0,
00:08:13.178 "data_size": 0
00:08:13.178 }
00:08:13.178 ]
00:08:13.178 }'
00:08:13.178 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.178 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.436 [2024-11-27 19:06:22.921084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:13.436 [2024-11-27 19:06:22.921124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.436 [2024-11-27 19:06:22.933072] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:13.436 [2024-11-27 19:06:22.933115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:13.436 [2024-11-27 19:06:22.933124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:13.436 [2024-11-27 19:06:22.933138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.436 [2024-11-27 19:06:22.987011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:13.436 BaseBdev1
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.436 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:13.437 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.437 19:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.437 [
00:08:13.437 {
00:08:13.437 "name": "BaseBdev1",
00:08:13.437 "aliases": [
00:08:13.437 "101b8e00-7c0b-4f13-941f-6b3b73155436"
00:08:13.437 ],
00:08:13.437 "product_name": "Malloc disk",
00:08:13.437 "block_size": 512,
00:08:13.437 "num_blocks": 65536,
00:08:13.437 "uuid": "101b8e00-7c0b-4f13-941f-6b3b73155436",
00:08:13.437 "assigned_rate_limits": {
00:08:13.437 "rw_ios_per_sec": 0,
00:08:13.437 "rw_mbytes_per_sec": 0,
00:08:13.437 "r_mbytes_per_sec": 0,
00:08:13.437 "w_mbytes_per_sec": 0
00:08:13.437 },
00:08:13.437 "claimed": true,
00:08:13.437 "claim_type": "exclusive_write",
00:08:13.437 "zoned": false,
00:08:13.437 "supported_io_types": {
00:08:13.437 "read": true,
00:08:13.437 "write": true,
00:08:13.437 "unmap": true,
00:08:13.437 "flush": true,
00:08:13.437 "reset": true,
00:08:13.437 "nvme_admin": false,
00:08:13.437 "nvme_io": false,
00:08:13.437 "nvme_io_md": false,
00:08:13.437 "write_zeroes": true,
00:08:13.437 "zcopy": true,
00:08:13.437 "get_zone_info": false,
00:08:13.437 "zone_management": false,
00:08:13.437 "zone_append": false,
00:08:13.437 "compare": false,
00:08:13.437 "compare_and_write": false,
00:08:13.437 "abort": true,
00:08:13.437 "seek_hole": false,
00:08:13.437 "seek_data": false,
00:08:13.437 "copy": true,
00:08:13.437 "nvme_iov_md": false
00:08:13.437 },
00:08:13.437 "memory_domains": [
00:08:13.437 {
00:08:13.437 "dma_device_id": "system",
00:08:13.437 "dma_device_type": 1
00:08:13.437 },
00:08:13.437 {
00:08:13.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.437 "dma_device_type": 2
00:08:13.437 }
00:08:13.437 ],
00:08:13.437 "driver_specific": {}
00:08:13.437 }
00:08:13.437 ]
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.437 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.695 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.695 "name": "Existed_Raid",
00:08:13.695 "uuid": "19ed539e-b7bd-46d4-86ca-099cf5247eb3",
00:08:13.695 "strip_size_kb": 64,
00:08:13.695 "state": "configuring",
00:08:13.695 "raid_level": "raid0",
00:08:13.695 "superblock": true,
00:08:13.695 "num_base_bdevs": 2,
00:08:13.695 "num_base_bdevs_discovered": 1,
00:08:13.695 "num_base_bdevs_operational": 2,
00:08:13.695 "base_bdevs_list": [
00:08:13.695 {
00:08:13.695 "name": "BaseBdev1",
00:08:13.695 "uuid": "101b8e00-7c0b-4f13-941f-6b3b73155436",
00:08:13.695 "is_configured": true,
00:08:13.695 "data_offset": 2048,
00:08:13.695 "data_size": 63488
00:08:13.695 },
00:08:13.695 {
00:08:13.695 "name": "BaseBdev2",
00:08:13.695 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.695 "is_configured": false,
00:08:13.695 "data_offset": 0,
00:08:13.695 "data_size": 0
00:08:13.695 }
00:08:13.695 ]
00:08:13.695 }'
00:08:13.695 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.695 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.953 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:13.953 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.953 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.954 [2024-11-27 19:06:23.482181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:13.954 [2024-11-27 19:06:23.482240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.954 [2024-11-27 19:06:23.494238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:13.954 [2024-11-27 19:06:23.496394] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:13.954 [2024-11-27 19:06:23.496442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.954 "name": "Existed_Raid",
00:08:13.954 "uuid": "68edf978-a0de-4c31-aea2-a6294d1763a6",
00:08:13.954 "strip_size_kb": 64,
00:08:13.954 "state": "configuring",
00:08:13.954 "raid_level": "raid0",
00:08:13.954 "superblock": true,
00:08:13.954 "num_base_bdevs": 2,
00:08:13.954 "num_base_bdevs_discovered": 1,
00:08:13.954 "num_base_bdevs_operational": 2,
00:08:13.954 "base_bdevs_list": [
00:08:13.954 {
00:08:13.954 "name": "BaseBdev1",
00:08:13.954 "uuid": "101b8e00-7c0b-4f13-941f-6b3b73155436",
00:08:13.954 "is_configured": true,
00:08:13.954 "data_offset": 2048,
00:08:13.954 "data_size": 63488
00:08:13.954 },
00:08:13.954 {
00:08:13.954 "name": "BaseBdev2",
00:08:13.954 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:13.954 "is_configured": false,
00:08:13.954 "data_offset": 0,
00:08:13.954 "data_size": 0
00:08:13.954 }
00:08:13.954 ]
00:08:13.954 }'
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:13.954 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.521 [2024-11-27 19:06:23.981179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:14.521 [2024-11-27 19:06:23.981468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:14.521 [2024-11-27 19:06:23.981489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:14.521 [2024-11-27 19:06:23.981890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:14.521 BaseBdev2
[2024-11-27 19:06:23.982077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
[2024-11-27 19:06:23.982093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
[2024-11-27 19:06:23.982256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.521 19:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.521 [
00:08:14.521 {
00:08:14.521 "name": "BaseBdev2",
00:08:14.521 "aliases": [
00:08:14.521 "caf8c7fb-cec3-40f5-b9a3-a0dd8fa79cc9"
00:08:14.521 ],
00:08:14.521 "product_name": "Malloc disk",
00:08:14.521 "block_size": 512,
00:08:14.521 "num_blocks": 65536,
00:08:14.521 "uuid": "caf8c7fb-cec3-40f5-b9a3-a0dd8fa79cc9",
00:08:14.521 "assigned_rate_limits": {
00:08:14.521 "rw_ios_per_sec": 0,
00:08:14.521 "rw_mbytes_per_sec": 0,
00:08:14.521 "r_mbytes_per_sec": 0,
00:08:14.521 "w_mbytes_per_sec": 0
00:08:14.521 },
00:08:14.521 "claimed": true,
00:08:14.521 "claim_type": "exclusive_write",
00:08:14.521 "zoned": false,
00:08:14.521 "supported_io_types": {
00:08:14.521 "read": true,
00:08:14.521 "write": true,
00:08:14.521 "unmap": true,
00:08:14.521 "flush": true,
00:08:14.521 "reset": true,
00:08:14.521 "nvme_admin": false,
00:08:14.521 "nvme_io": false,
00:08:14.521 "nvme_io_md": false,
00:08:14.521 "write_zeroes": true,
00:08:14.521 "zcopy": true,
00:08:14.521 "get_zone_info": false,
00:08:14.521 "zone_management": false,
00:08:14.521 "zone_append": false,
00:08:14.521 "compare": false,
00:08:14.521 "compare_and_write": false,
00:08:14.521 "abort": true,
00:08:14.521 "seek_hole": false,
00:08:14.521 "seek_data": false,
00:08:14.521 "copy": true,
00:08:14.521 "nvme_iov_md": false
00:08:14.521 },
00:08:14.521 "memory_domains": [
00:08:14.521 {
00:08:14.521 "dma_device_id": "system",
00:08:14.521 "dma_device_type": 1
00:08:14.521 },
00:08:14.521 {
00:08:14.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:14.521 "dma_device_type": 2
00:08:14.521 }
00:08:14.521 ],
00:08:14.521 "driver_specific": {}
00:08:14.521 }
00:08:14.521 ]
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.521 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:14.522 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:14.522 "name": "Existed_Raid",
00:08:14.522 "uuid": "68edf978-a0de-4c31-aea2-a6294d1763a6",
00:08:14.522 "strip_size_kb": 64,
00:08:14.522 "state": "online",
00:08:14.522 "raid_level": "raid0",
00:08:14.522 "superblock": true,
00:08:14.522 "num_base_bdevs": 2,
00:08:14.522 "num_base_bdevs_discovered": 2,
00:08:14.522 "num_base_bdevs_operational": 2,
00:08:14.522 "base_bdevs_list": [
00:08:14.522 {
00:08:14.522 "name": "BaseBdev1",
00:08:14.522 "uuid": "101b8e00-7c0b-4f13-941f-6b3b73155436",
00:08:14.522 "is_configured": true,
00:08:14.522 "data_offset": 2048,
00:08:14.522 "data_size": 63488
00:08:14.522 },
00:08:14.522 {
00:08:14.522 "name": "BaseBdev2",
00:08:14.522 "uuid": "caf8c7fb-cec3-40f5-b9a3-a0dd8fa79cc9",
00:08:14.522 "is_configured": true,
00:08:14.522 "data_offset": 2048,
00:08:14.522 "data_size": 63488
00:08:14.522 }
00:08:14.522 ]
00:08:14.522 }'
00:08:14.522 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:14.522 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b
Existed_Raid 00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.780 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.780 [2024-11-27 19:06:24.396831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.040 "name": "Existed_Raid", 00:08:15.040 "aliases": [ 00:08:15.040 "68edf978-a0de-4c31-aea2-a6294d1763a6" 00:08:15.040 ], 00:08:15.040 "product_name": "Raid Volume", 00:08:15.040 "block_size": 512, 00:08:15.040 "num_blocks": 126976, 00:08:15.040 "uuid": "68edf978-a0de-4c31-aea2-a6294d1763a6", 00:08:15.040 "assigned_rate_limits": { 00:08:15.040 "rw_ios_per_sec": 0, 00:08:15.040 "rw_mbytes_per_sec": 0, 00:08:15.040 "r_mbytes_per_sec": 0, 00:08:15.040 "w_mbytes_per_sec": 0 00:08:15.040 }, 00:08:15.040 "claimed": false, 00:08:15.040 "zoned": false, 00:08:15.040 "supported_io_types": { 00:08:15.040 "read": true, 00:08:15.040 "write": true, 00:08:15.040 "unmap": true, 00:08:15.040 "flush": true, 00:08:15.040 "reset": true, 00:08:15.040 "nvme_admin": false, 00:08:15.040 "nvme_io": false, 00:08:15.040 "nvme_io_md": false, 00:08:15.040 "write_zeroes": true, 00:08:15.040 "zcopy": false, 00:08:15.040 "get_zone_info": false, 00:08:15.040 "zone_management": false, 00:08:15.040 "zone_append": false, 00:08:15.040 "compare": false, 00:08:15.040 "compare_and_write": false, 00:08:15.040 "abort": false, 00:08:15.040 "seek_hole": false, 00:08:15.040 "seek_data": false, 00:08:15.040 "copy": false, 00:08:15.040 "nvme_iov_md": false 00:08:15.040 }, 00:08:15.040 "memory_domains": [ 00:08:15.040 { 00:08:15.040 
"dma_device_id": "system", 00:08:15.040 "dma_device_type": 1 00:08:15.040 }, 00:08:15.040 { 00:08:15.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.040 "dma_device_type": 2 00:08:15.040 }, 00:08:15.040 { 00:08:15.040 "dma_device_id": "system", 00:08:15.040 "dma_device_type": 1 00:08:15.040 }, 00:08:15.040 { 00:08:15.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.040 "dma_device_type": 2 00:08:15.040 } 00:08:15.040 ], 00:08:15.040 "driver_specific": { 00:08:15.040 "raid": { 00:08:15.040 "uuid": "68edf978-a0de-4c31-aea2-a6294d1763a6", 00:08:15.040 "strip_size_kb": 64, 00:08:15.040 "state": "online", 00:08:15.040 "raid_level": "raid0", 00:08:15.040 "superblock": true, 00:08:15.040 "num_base_bdevs": 2, 00:08:15.040 "num_base_bdevs_discovered": 2, 00:08:15.040 "num_base_bdevs_operational": 2, 00:08:15.040 "base_bdevs_list": [ 00:08:15.040 { 00:08:15.040 "name": "BaseBdev1", 00:08:15.040 "uuid": "101b8e00-7c0b-4f13-941f-6b3b73155436", 00:08:15.040 "is_configured": true, 00:08:15.040 "data_offset": 2048, 00:08:15.040 "data_size": 63488 00:08:15.040 }, 00:08:15.040 { 00:08:15.040 "name": "BaseBdev2", 00:08:15.040 "uuid": "caf8c7fb-cec3-40f5-b9a3-a0dd8fa79cc9", 00:08:15.040 "is_configured": true, 00:08:15.040 "data_offset": 2048, 00:08:15.040 "data_size": 63488 00:08:15.040 } 00:08:15.040 ] 00:08:15.040 } 00:08:15.040 } 00:08:15.040 }' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:15.040 BaseBdev2' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.040 19:06:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.040 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.040 [2024-11-27 19:06:24.588200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.040 [2024-11-27 19:06:24.588241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.040 [2024-11-27 19:06:24.588298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.299 "name": "Existed_Raid", 00:08:15.299 "uuid": "68edf978-a0de-4c31-aea2-a6294d1763a6", 00:08:15.299 "strip_size_kb": 64, 00:08:15.299 "state": "offline", 00:08:15.299 "raid_level": "raid0", 00:08:15.299 "superblock": true, 00:08:15.299 "num_base_bdevs": 2, 00:08:15.299 "num_base_bdevs_discovered": 1, 00:08:15.299 "num_base_bdevs_operational": 1, 00:08:15.299 "base_bdevs_list": [ 00:08:15.299 { 00:08:15.299 "name": null, 00:08:15.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.299 "is_configured": false, 00:08:15.299 "data_offset": 0, 00:08:15.299 "data_size": 63488 00:08:15.299 }, 00:08:15.299 { 00:08:15.299 "name": "BaseBdev2", 00:08:15.299 "uuid": "caf8c7fb-cec3-40f5-b9a3-a0dd8fa79cc9", 00:08:15.299 "is_configured": true, 00:08:15.299 "data_offset": 2048, 00:08:15.299 "data_size": 63488 00:08:15.299 } 00:08:15.299 ] 
00:08:15.299 }' 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.299 19:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.558 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.558 [2024-11-27 19:06:25.182596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:15.558 [2024-11-27 19:06:25.182680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.817 19:06:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61048 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61048 ']' 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61048 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61048 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:08:15.817 killing process with pid 61048 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61048' 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61048 00:08:15.817 [2024-11-27 19:06:25.381994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.817 19:06:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61048 00:08:15.817 [2024-11-27 19:06:25.399355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.210 19:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.210 00:08:17.210 real 0m5.012s 00:08:17.210 user 0m7.001s 00:08:17.210 sys 0m0.951s 00:08:17.210 19:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.210 19:06:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.210 ************************************ 00:08:17.210 END TEST raid_state_function_test_sb 00:08:17.210 ************************************ 00:08:17.210 19:06:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:17.210 19:06:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:17.210 19:06:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.210 19:06:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.210 ************************************ 00:08:17.210 START TEST raid_superblock_test 00:08:17.210 ************************************ 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61295 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61295 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61295 ']' 00:08:17.210 19:06:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.210 19:06:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.210 [2024-11-27 19:06:26.778151] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:17.210 [2024-11-27 19:06:26.778272] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61295 ] 00:08:17.469 [2024-11-27 19:06:26.959954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.469 [2024-11-27 19:06:27.098857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.728 [2024-11-27 19:06:27.331727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.728 [2024-11-27 19:06:27.331790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:17.986 
19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.986 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.245 malloc1 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.245 [2024-11-27 19:06:27.647166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.245 [2024-11-27 19:06:27.647234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.245 [2024-11-27 19:06:27.647258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.245 [2024-11-27 19:06:27.647268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:18.245 [2024-11-27 19:06:27.649683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.245 [2024-11-27 19:06:27.649733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.245 pt1 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.245 malloc2 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.245 [2024-11-27 19:06:27.710131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.245 [2024-11-27 19:06:27.710187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.245 [2024-11-27 19:06:27.710216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.245 [2024-11-27 19:06:27.710225] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.245 [2024-11-27 19:06:27.712618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.245 [2024-11-27 19:06:27.712652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:18.245 pt2 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.245 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.245 [2024-11-27 19:06:27.722173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.245 [2024-11-27 19:06:27.724247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:18.245 [2024-11-27 19:06:27.724413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:18.245 [2024-11-27 19:06:27.724426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:18.245 [2024-11-27 19:06:27.724685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.246 [2024-11-27 19:06:27.724859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:18.246 [2024-11-27 19:06:27.724876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:18.246 [2024-11-27 19:06:27.725014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.246 19:06:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.246 "name": "raid_bdev1", 00:08:18.246 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:18.246 "strip_size_kb": 64, 00:08:18.246 "state": "online", 00:08:18.246 "raid_level": "raid0", 00:08:18.246 "superblock": true, 00:08:18.246 "num_base_bdevs": 2, 00:08:18.246 "num_base_bdevs_discovered": 2, 00:08:18.246 "num_base_bdevs_operational": 2, 00:08:18.246 "base_bdevs_list": [ 00:08:18.246 { 00:08:18.246 "name": "pt1", 00:08:18.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.246 "is_configured": true, 00:08:18.246 "data_offset": 2048, 00:08:18.246 "data_size": 63488 00:08:18.246 }, 00:08:18.246 { 00:08:18.246 "name": "pt2", 00:08:18.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.246 "is_configured": true, 00:08:18.246 "data_offset": 2048, 00:08:18.246 "data_size": 63488 00:08:18.246 } 00:08:18.246 ] 00:08:18.246 }' 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.246 19:06:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.813 
19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.813 [2024-11-27 19:06:28.153650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.813 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.813 "name": "raid_bdev1", 00:08:18.813 "aliases": [ 00:08:18.813 "43079624-5009-4d45-a342-83323d7e483c" 00:08:18.813 ], 00:08:18.813 "product_name": "Raid Volume", 00:08:18.813 "block_size": 512, 00:08:18.813 "num_blocks": 126976, 00:08:18.813 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:18.813 "assigned_rate_limits": { 00:08:18.813 "rw_ios_per_sec": 0, 00:08:18.813 "rw_mbytes_per_sec": 0, 00:08:18.813 "r_mbytes_per_sec": 0, 00:08:18.813 "w_mbytes_per_sec": 0 00:08:18.813 }, 00:08:18.813 "claimed": false, 00:08:18.813 "zoned": false, 00:08:18.813 "supported_io_types": { 00:08:18.813 "read": true, 00:08:18.813 "write": true, 00:08:18.813 "unmap": true, 00:08:18.813 "flush": true, 00:08:18.813 "reset": true, 00:08:18.813 "nvme_admin": false, 00:08:18.813 "nvme_io": false, 00:08:18.813 "nvme_io_md": false, 00:08:18.813 "write_zeroes": true, 00:08:18.813 "zcopy": false, 00:08:18.813 "get_zone_info": false, 00:08:18.813 "zone_management": false, 00:08:18.813 "zone_append": false, 00:08:18.813 "compare": false, 00:08:18.813 "compare_and_write": false, 00:08:18.813 "abort": false, 00:08:18.813 "seek_hole": false, 00:08:18.813 
"seek_data": false, 00:08:18.813 "copy": false, 00:08:18.813 "nvme_iov_md": false 00:08:18.813 }, 00:08:18.813 "memory_domains": [ 00:08:18.813 { 00:08:18.813 "dma_device_id": "system", 00:08:18.813 "dma_device_type": 1 00:08:18.813 }, 00:08:18.813 { 00:08:18.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.813 "dma_device_type": 2 00:08:18.813 }, 00:08:18.813 { 00:08:18.813 "dma_device_id": "system", 00:08:18.813 "dma_device_type": 1 00:08:18.813 }, 00:08:18.813 { 00:08:18.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.813 "dma_device_type": 2 00:08:18.813 } 00:08:18.813 ], 00:08:18.813 "driver_specific": { 00:08:18.813 "raid": { 00:08:18.813 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:18.813 "strip_size_kb": 64, 00:08:18.813 "state": "online", 00:08:18.813 "raid_level": "raid0", 00:08:18.813 "superblock": true, 00:08:18.813 "num_base_bdevs": 2, 00:08:18.813 "num_base_bdevs_discovered": 2, 00:08:18.813 "num_base_bdevs_operational": 2, 00:08:18.813 "base_bdevs_list": [ 00:08:18.813 { 00:08:18.813 "name": "pt1", 00:08:18.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.813 "is_configured": true, 00:08:18.813 "data_offset": 2048, 00:08:18.813 "data_size": 63488 00:08:18.813 }, 00:08:18.813 { 00:08:18.813 "name": "pt2", 00:08:18.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.813 "is_configured": true, 00:08:18.813 "data_offset": 2048, 00:08:18.814 "data_size": 63488 00:08:18.814 } 00:08:18.814 ] 00:08:18.814 } 00:08:18.814 } 00:08:18.814 }' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:18.814 pt2' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.814 19:06:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 [2024-11-27 19:06:28.365216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=43079624-5009-4d45-a342-83323d7e483c 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 43079624-5009-4d45-a342-83323d7e483c ']' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 [2024-11-27 19:06:28.392906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.814 [2024-11-27 19:06:28.392928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.814 [2024-11-27 19:06:28.393001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.814 [2024-11-27 19:06:28.393044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.814 [2024-11-27 19:06:28.393056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.814 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.073 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.073 [2024-11-27 19:06:28.516790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.073 [2024-11-27 19:06:28.518903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.073 [2024-11-27 19:06:28.518970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.073 [2024-11-27 19:06:28.519013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.073 [2024-11-27 19:06:28.519027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.073 [2024-11-27 19:06:28.519040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:19.073 request: 00:08:19.073 { 00:08:19.073 "name": "raid_bdev1", 00:08:19.074 "raid_level": "raid0", 00:08:19.074 "base_bdevs": [ 00:08:19.074 "malloc1", 00:08:19.074 "malloc2" 00:08:19.074 ], 00:08:19.074 "strip_size_kb": 64, 00:08:19.074 "superblock": false, 00:08:19.074 "method": "bdev_raid_create", 00:08:19.074 "req_id": 1 00:08:19.074 } 00:08:19.074 Got JSON-RPC error response 00:08:19.074 response: 00:08:19.074 { 00:08:19.074 "code": -17, 00:08:19.074 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.074 } 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.074 
19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.074 [2024-11-27 19:06:28.572764] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.074 [2024-11-27 19:06:28.572852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.074 [2024-11-27 19:06:28.572871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:19.074 [2024-11-27 19:06:28.572883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.074 [2024-11-27 19:06:28.575473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.074 [2024-11-27 19:06:28.575512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.074 [2024-11-27 19:06:28.575617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.074 [2024-11-27 19:06:28.575702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.074 pt1 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.074 "name": "raid_bdev1", 00:08:19.074 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:19.074 "strip_size_kb": 64, 00:08:19.074 "state": "configuring", 00:08:19.074 "raid_level": "raid0", 00:08:19.074 "superblock": true, 00:08:19.074 "num_base_bdevs": 2, 00:08:19.074 "num_base_bdevs_discovered": 1, 00:08:19.074 "num_base_bdevs_operational": 2, 00:08:19.074 "base_bdevs_list": [ 00:08:19.074 { 00:08:19.074 "name": "pt1", 00:08:19.074 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:19.074 "is_configured": true, 00:08:19.074 "data_offset": 2048, 00:08:19.074 "data_size": 63488 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "name": null, 00:08:19.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.074 "is_configured": false, 00:08:19.074 "data_offset": 2048, 00:08:19.074 "data_size": 63488 00:08:19.074 } 00:08:19.074 ] 00:08:19.074 }' 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.074 19:06:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.642 [2024-11-27 19:06:29.051903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.642 [2024-11-27 19:06:29.051992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.642 [2024-11-27 19:06:29.052018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:19.642 [2024-11-27 19:06:29.052032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.642 [2024-11-27 19:06:29.052582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.642 [2024-11-27 19:06:29.052612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:19.642 [2024-11-27 19:06:29.052727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.642 [2024-11-27 19:06:29.052766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.642 [2024-11-27 19:06:29.052905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.642 [2024-11-27 19:06:29.052921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:19.642 [2024-11-27 19:06:29.053190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:19.642 [2024-11-27 19:06:29.053348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.642 [2024-11-27 19:06:29.053367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:19.642 [2024-11-27 19:06:29.053525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.642 pt2 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.642 "name": "raid_bdev1", 00:08:19.642 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:19.642 "strip_size_kb": 64, 00:08:19.642 "state": "online", 00:08:19.642 "raid_level": "raid0", 00:08:19.642 "superblock": true, 00:08:19.642 "num_base_bdevs": 2, 00:08:19.642 "num_base_bdevs_discovered": 2, 00:08:19.642 "num_base_bdevs_operational": 2, 00:08:19.642 "base_bdevs_list": [ 00:08:19.642 { 00:08:19.642 "name": "pt1", 00:08:19.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.642 "is_configured": true, 00:08:19.642 "data_offset": 2048, 00:08:19.642 "data_size": 63488 00:08:19.642 }, 00:08:19.642 { 00:08:19.642 "name": "pt2", 00:08:19.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.642 "is_configured": true, 00:08:19.642 "data_offset": 2048, 00:08:19.642 "data_size": 63488 00:08:19.642 } 00:08:19.642 ] 00:08:19.642 }' 00:08:19.642 19:06:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.642 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.900 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.901 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.901 [2024-11-27 19:06:29.507383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.901 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.159 "name": "raid_bdev1", 00:08:20.159 "aliases": [ 00:08:20.159 "43079624-5009-4d45-a342-83323d7e483c" 00:08:20.159 ], 00:08:20.159 "product_name": "Raid Volume", 00:08:20.159 "block_size": 512, 00:08:20.159 "num_blocks": 126976, 00:08:20.159 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:20.159 "assigned_rate_limits": { 00:08:20.159 "rw_ios_per_sec": 0, 00:08:20.159 "rw_mbytes_per_sec": 0, 00:08:20.159 
"r_mbytes_per_sec": 0, 00:08:20.159 "w_mbytes_per_sec": 0 00:08:20.159 }, 00:08:20.159 "claimed": false, 00:08:20.159 "zoned": false, 00:08:20.159 "supported_io_types": { 00:08:20.159 "read": true, 00:08:20.159 "write": true, 00:08:20.159 "unmap": true, 00:08:20.159 "flush": true, 00:08:20.159 "reset": true, 00:08:20.159 "nvme_admin": false, 00:08:20.159 "nvme_io": false, 00:08:20.159 "nvme_io_md": false, 00:08:20.159 "write_zeroes": true, 00:08:20.159 "zcopy": false, 00:08:20.159 "get_zone_info": false, 00:08:20.159 "zone_management": false, 00:08:20.159 "zone_append": false, 00:08:20.159 "compare": false, 00:08:20.159 "compare_and_write": false, 00:08:20.159 "abort": false, 00:08:20.159 "seek_hole": false, 00:08:20.159 "seek_data": false, 00:08:20.159 "copy": false, 00:08:20.159 "nvme_iov_md": false 00:08:20.159 }, 00:08:20.159 "memory_domains": [ 00:08:20.159 { 00:08:20.159 "dma_device_id": "system", 00:08:20.159 "dma_device_type": 1 00:08:20.159 }, 00:08:20.159 { 00:08:20.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.159 "dma_device_type": 2 00:08:20.159 }, 00:08:20.159 { 00:08:20.159 "dma_device_id": "system", 00:08:20.159 "dma_device_type": 1 00:08:20.159 }, 00:08:20.159 { 00:08:20.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.159 "dma_device_type": 2 00:08:20.159 } 00:08:20.159 ], 00:08:20.159 "driver_specific": { 00:08:20.159 "raid": { 00:08:20.159 "uuid": "43079624-5009-4d45-a342-83323d7e483c", 00:08:20.159 "strip_size_kb": 64, 00:08:20.159 "state": "online", 00:08:20.159 "raid_level": "raid0", 00:08:20.159 "superblock": true, 00:08:20.159 "num_base_bdevs": 2, 00:08:20.159 "num_base_bdevs_discovered": 2, 00:08:20.159 "num_base_bdevs_operational": 2, 00:08:20.159 "base_bdevs_list": [ 00:08:20.159 { 00:08:20.159 "name": "pt1", 00:08:20.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.159 "is_configured": true, 00:08:20.159 "data_offset": 2048, 00:08:20.159 "data_size": 63488 00:08:20.159 }, 00:08:20.159 { 00:08:20.159 "name": 
"pt2", 00:08:20.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.159 "is_configured": true, 00:08:20.159 "data_offset": 2048, 00:08:20.159 "data_size": 63488 00:08:20.159 } 00:08:20.159 ] 00:08:20.159 } 00:08:20.159 } 00:08:20.159 }' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.159 pt2' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.159 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.160 [2024-11-27 19:06:29.703054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 43079624-5009-4d45-a342-83323d7e483c '!=' 43079624-5009-4d45-a342-83323d7e483c ']' 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61295 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61295 ']' 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61295 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61295 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.160 killing process with pid 61295 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61295' 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61295 00:08:20.160 [2024-11-27 19:06:29.762585] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.160 [2024-11-27 19:06:29.762737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.160 19:06:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61295 00:08:20.160 [2024-11-27 19:06:29.762800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.160 [2024-11-27 19:06:29.762815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.419 [2024-11-27 19:06:29.984139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.796 19:06:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:21.796 00:08:21.796 real 0m4.512s 00:08:21.796 user 0m6.068s 00:08:21.796 sys 0m0.863s 00:08:21.796 19:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.796 19:06:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:21.796 ************************************ 00:08:21.796 END TEST raid_superblock_test 00:08:21.796 ************************************ 00:08:21.796 19:06:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:21.796 19:06:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.796 19:06:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.796 19:06:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.796 ************************************ 00:08:21.796 START TEST raid_read_error_test 00:08:21.796 ************************************ 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qYOQC2LAgT 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61512 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61512 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61512 ']' 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.796 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.796 19:06:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.796 [2024-11-27 19:06:31.370127] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:21.796 [2024-11-27 19:06:31.370248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61512 ] 00:08:22.054 [2024-11-27 19:06:31.548529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.054 [2024-11-27 19:06:31.687038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.313 [2024-11-27 19:06:31.918296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.313 [2024-11-27 19:06:31.918378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.572 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.572 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.572 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.572 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.572 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.572 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.831 BaseBdev1_malloc 
00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.831 true 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.831 [2024-11-27 19:06:32.245826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.831 [2024-11-27 19:06:32.245888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.831 [2024-11-27 19:06:32.245909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.831 [2024-11-27 19:06:32.245922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.831 [2024-11-27 19:06:32.248314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.831 [2024-11-27 19:06:32.248353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.831 BaseBdev1 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.831 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 BaseBdev2_malloc 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 true 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 [2024-11-27 19:06:32.315881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.832 [2024-11-27 19:06:32.315941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.832 [2024-11-27 19:06:32.315958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.832 [2024-11-27 19:06:32.315969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.832 [2024-11-27 19:06:32.318311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.832 [2024-11-27 19:06:32.318347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.832 BaseBdev2 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 [2024-11-27 19:06:32.327937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.832 [2024-11-27 19:06:32.330034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.832 [2024-11-27 19:06:32.330232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.832 [2024-11-27 19:06:32.330256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.832 [2024-11-27 19:06:32.330509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:22.832 [2024-11-27 19:06:32.330723] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.832 [2024-11-27 19:06:32.330744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.832 [2024-11-27 19:06:32.330907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.832 "name": "raid_bdev1", 00:08:22.832 "uuid": "c00eac14-b192-441a-82c5-18ce9561313a", 00:08:22.832 "strip_size_kb": 64, 00:08:22.832 "state": "online", 00:08:22.832 "raid_level": "raid0", 00:08:22.832 "superblock": true, 00:08:22.832 "num_base_bdevs": 2, 00:08:22.832 "num_base_bdevs_discovered": 2, 00:08:22.832 "num_base_bdevs_operational": 2, 00:08:22.832 "base_bdevs_list": [ 00:08:22.832 { 00:08:22.832 "name": "BaseBdev1", 00:08:22.832 "uuid": "00e698be-11fc-5dcf-928f-7cd56e26be02", 00:08:22.832 "is_configured": true, 00:08:22.832 "data_offset": 2048, 00:08:22.832 "data_size": 63488 00:08:22.832 }, 00:08:22.832 { 00:08:22.832 "name": "BaseBdev2", 00:08:22.832 "uuid": 
"88be6c6e-88b8-5372-bf6b-d210997f804c", 00:08:22.832 "is_configured": true, 00:08:22.832 "data_offset": 2048, 00:08:22.832 "data_size": 63488 00:08:22.832 } 00:08:22.832 ] 00:08:22.832 }' 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.832 19:06:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.399 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:23.400 19:06:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.400 [2024-11-27 19:06:32.820574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.336 "name": "raid_bdev1", 00:08:24.336 "uuid": "c00eac14-b192-441a-82c5-18ce9561313a", 00:08:24.336 "strip_size_kb": 64, 00:08:24.336 "state": "online", 00:08:24.336 "raid_level": "raid0", 00:08:24.336 "superblock": true, 00:08:24.336 "num_base_bdevs": 2, 00:08:24.336 "num_base_bdevs_discovered": 2, 00:08:24.336 "num_base_bdevs_operational": 2, 00:08:24.336 "base_bdevs_list": [ 00:08:24.336 { 00:08:24.336 "name": "BaseBdev1", 00:08:24.336 "uuid": "00e698be-11fc-5dcf-928f-7cd56e26be02", 00:08:24.336 "is_configured": true, 00:08:24.336 "data_offset": 2048, 00:08:24.336 "data_size": 63488 00:08:24.336 }, 00:08:24.336 { 00:08:24.336 "name": "BaseBdev2", 00:08:24.336 "uuid": 
"88be6c6e-88b8-5372-bf6b-d210997f804c", 00:08:24.336 "is_configured": true, 00:08:24.336 "data_offset": 2048, 00:08:24.336 "data_size": 63488 00:08:24.336 } 00:08:24.336 ] 00:08:24.336 }' 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.336 19:06:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.595 [2024-11-27 19:06:34.193430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.595 [2024-11-27 19:06:34.193476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.595 [2024-11-27 19:06:34.196208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.595 [2024-11-27 19:06:34.196258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.595 [2024-11-27 19:06:34.196293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.595 [2024-11-27 19:06:34.196306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:24.595 { 00:08:24.595 "results": [ 00:08:24.595 { 00:08:24.595 "job": "raid_bdev1", 00:08:24.595 "core_mask": "0x1", 00:08:24.595 "workload": "randrw", 00:08:24.595 "percentage": 50, 00:08:24.595 "status": "finished", 00:08:24.595 "queue_depth": 1, 00:08:24.595 "io_size": 131072, 00:08:24.595 "runtime": 1.373495, 00:08:24.595 "iops": 14096.156156374796, 00:08:24.595 "mibps": 1762.0195195468496, 00:08:24.595 "io_failed": 1, 00:08:24.595 "io_timeout": 0, 00:08:24.595 "avg_latency_us": 
99.35532265288917, 00:08:24.595 "min_latency_us": 26.717903930131005, 00:08:24.595 "max_latency_us": 1423.7624454148472 00:08:24.595 } 00:08:24.595 ], 00:08:24.595 "core_count": 1 00:08:24.595 } 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61512 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61512 ']' 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61512 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.595 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61512 00:08:24.855 killing process with pid 61512 00:08:24.855 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.855 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.855 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61512' 00:08:24.855 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61512 00:08:24.855 19:06:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61512 00:08:24.855 [2024-11-27 19:06:34.239420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.855 [2024-11-27 19:06:34.385645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qYOQC2LAgT 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:26.247 
19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:26.247 ************************************ 00:08:26.247 END TEST raid_read_error_test 00:08:26.247 ************************************ 00:08:26.247 00:08:26.247 real 0m4.419s 00:08:26.247 user 0m5.089s 00:08:26.247 sys 0m0.684s 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.247 19:06:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.247 19:06:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:26.247 19:06:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.247 19:06:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.247 19:06:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.247 ************************************ 00:08:26.247 START TEST raid_write_error_test 00:08:26.247 ************************************ 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:26.247 19:06:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PzgvloFiqM 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61652 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61652 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61652 ']' 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.247 19:06:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.247 [2024-11-27 19:06:35.859776] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:26.247 [2024-11-27 19:06:35.860029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61652 ] 00:08:26.507 [2024-11-27 19:06:36.038206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.765 [2024-11-27 19:06:36.181064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.024 [2024-11-27 19:06:36.417517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.024 [2024-11-27 19:06:36.417654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.283 BaseBdev1_malloc 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.283 true 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.283 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.283 [2024-11-27 19:06:36.751339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:27.283 [2024-11-27 19:06:36.751449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.283 [2024-11-27 19:06:36.751475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:27.283 [2024-11-27 19:06:36.751487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.283 [2024-11-27 19:06:36.754021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.284 [2024-11-27 19:06:36.754062] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:27.284 BaseBdev1 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 BaseBdev2_malloc 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:27.284 19:06:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 true 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 [2024-11-27 19:06:36.827027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:27.284 [2024-11-27 19:06:36.827091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.284 [2024-11-27 19:06:36.827109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:27.284 [2024-11-27 19:06:36.827120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.284 [2024-11-27 19:06:36.829549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.284 [2024-11-27 19:06:36.829587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:27.284 BaseBdev2 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 [2024-11-27 19:06:36.839071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:27.284 [2024-11-27 19:06:36.841235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.284 [2024-11-27 19:06:36.841433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:27.284 [2024-11-27 19:06:36.841450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:27.284 [2024-11-27 19:06:36.841684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.284 [2024-11-27 19:06:36.841892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:27.284 [2024-11-27 19:06:36.841905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:27.284 [2024-11-27 19:06:36.842067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.284 "name": "raid_bdev1", 00:08:27.284 "uuid": "50d61674-cc27-42be-8aff-07758b4859a4", 00:08:27.284 "strip_size_kb": 64, 00:08:27.284 "state": "online", 00:08:27.284 "raid_level": "raid0", 00:08:27.284 "superblock": true, 00:08:27.284 "num_base_bdevs": 2, 00:08:27.284 "num_base_bdevs_discovered": 2, 00:08:27.284 "num_base_bdevs_operational": 2, 00:08:27.284 "base_bdevs_list": [ 00:08:27.284 { 00:08:27.284 "name": "BaseBdev1", 00:08:27.284 "uuid": "c6899360-a3c1-5d30-afb7-3bf90df5b431", 00:08:27.284 "is_configured": true, 00:08:27.284 "data_offset": 2048, 00:08:27.284 "data_size": 63488 00:08:27.284 }, 00:08:27.284 { 00:08:27.284 "name": "BaseBdev2", 00:08:27.284 "uuid": "8cfd75d7-c349-5c50-b570-a554afb400d7", 00:08:27.284 "is_configured": true, 00:08:27.284 "data_offset": 2048, 00:08:27.284 "data_size": 63488 00:08:27.284 } 00:08:27.284 ] 00:08:27.284 }' 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.284 19:06:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.852 19:06:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:27.852 19:06:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.852 [2024-11-27 19:06:37.327759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.790 19:06:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.790 "name": "raid_bdev1", 00:08:28.790 "uuid": "50d61674-cc27-42be-8aff-07758b4859a4", 00:08:28.790 "strip_size_kb": 64, 00:08:28.790 "state": "online", 00:08:28.790 "raid_level": "raid0", 00:08:28.790 "superblock": true, 00:08:28.790 "num_base_bdevs": 2, 00:08:28.790 "num_base_bdevs_discovered": 2, 00:08:28.790 "num_base_bdevs_operational": 2, 00:08:28.790 "base_bdevs_list": [ 00:08:28.790 { 00:08:28.790 "name": "BaseBdev1", 00:08:28.790 "uuid": "c6899360-a3c1-5d30-afb7-3bf90df5b431", 00:08:28.790 "is_configured": true, 00:08:28.790 "data_offset": 2048, 00:08:28.790 "data_size": 63488 00:08:28.790 }, 00:08:28.790 { 00:08:28.790 "name": "BaseBdev2", 00:08:28.790 "uuid": "8cfd75d7-c349-5c50-b570-a554afb400d7", 00:08:28.790 "is_configured": true, 00:08:28.790 "data_offset": 2048, 00:08:28.790 "data_size": 63488 00:08:28.790 } 00:08:28.790 ] 00:08:28.790 }' 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.790 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.358 [2024-11-27 19:06:38.700320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.358 [2024-11-27 19:06:38.700371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.358 [2024-11-27 19:06:38.702968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.358 { 00:08:29.358 "results": [ 00:08:29.358 { 00:08:29.358 "job": "raid_bdev1", 00:08:29.358 "core_mask": "0x1", 00:08:29.358 "workload": "randrw", 00:08:29.358 "percentage": 50, 00:08:29.358 "status": "finished", 00:08:29.358 "queue_depth": 1, 00:08:29.358 "io_size": 131072, 00:08:29.358 "runtime": 1.373128, 00:08:29.358 "iops": 14394.142425178134, 00:08:29.358 "mibps": 1799.2678031472667, 00:08:29.358 "io_failed": 1, 00:08:29.358 "io_timeout": 0, 00:08:29.358 "avg_latency_us": 97.38920461097902, 00:08:29.358 "min_latency_us": 24.929257641921396, 00:08:29.358 "max_latency_us": 1359.3711790393013 00:08:29.358 } 00:08:29.358 ], 00:08:29.358 "core_count": 1 00:08:29.358 } 00:08:29.358 [2024-11-27 19:06:38.703106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.358 [2024-11-27 19:06:38.703150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.358 [2024-11-27 19:06:38.703163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61652 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61652 ']' 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61652 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61652 00:08:29.358 killing process with pid 61652 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61652' 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61652 00:08:29.358 [2024-11-27 19:06:38.739312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.358 19:06:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61652 00:08:29.358 [2024-11-27 19:06:38.886129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PzgvloFiqM 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:30.746 00:08:30.746 real 0m4.418s 00:08:30.746 user 0m5.108s 00:08:30.746 sys 0m0.636s 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.746 ************************************ 00:08:30.746 END TEST raid_write_error_test 00:08:30.746 ************************************ 00:08:30.746 19:06:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.746 19:06:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:30.746 19:06:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:30.746 19:06:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.746 19:06:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.746 19:06:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.746 ************************************ 00:08:30.746 START TEST raid_state_function_test 00:08:30.746 ************************************ 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
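The pass/fail arithmetic in the write-error run traced above is recoverable from the bdevperf "results" JSON: throughput in MiB/s is just IOPS scaled by the 128 KiB `io_size`, and the `0.73` figure that `[[ 0.73 != \0\.\0\0 ]]` checks is consistent with `io_failed` divided by `runtime`, rounded to two decimals. A minimal sketch of that arithmetic (the derivation of `fail_per_s` is an inference from the logged numbers, not taken from bdev_raid.sh):

```python
# Figures copied from the bdevperf "results" JSON in the trace above.
iops = 14394.142425178134
io_size = 131072        # bytes per I/O (128 KiB)
runtime = 1.373128      # seconds
io_failed = 1

# MiB/s is IOPS times the per-I/O size, expressed in MiB.
mibps = iops * io_size / (1 << 20)
print(f"{mibps:.4f}")   # matches the logged "mibps" of ~1799.2678

# The 0.73 compared against "0.00" by the test is consistent with
# failed I/Os per second of runtime (an inference, not SPDK source).
fail_per_s = f"{io_failed / runtime:.2f}"
print(fail_per_s)
```

The nonzero `fail_per_s` is exactly what the test wants here: raid0 has no redundancy, so the injected write error on `EE_BaseBdev1_malloc` must surface as a failed I/O.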
00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61790 00:08:30.746 19:06:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61790' 00:08:30.746 Process raid pid: 61790 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61790 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61790 ']' 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.746 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.747 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.747 19:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.747 [2024-11-27 19:06:40.331766] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
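The `verify_raid_bdev_state` helper invoked repeatedly in the trace reduces to three steps: fetch `rpc_cmd bdev_raid_get_bdevs all`, select the entry by name (the `jq -r '.[] | select(.name == "...")'` line), and compare the state, level, strip size, and operational count against the expected values. A rough Python equivalent over the `raid_bdev1` record captured earlier in the log (the exact comparison set is inferred from the helper's local variables, not copied from bdev_raid.sh):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output shown in the trace.
rpc_output = json.loads("""[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid0",
   "strip_size_kb": 64, "num_base_bdevs": 2,
   "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}
]""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    # Mirrors the jq select(.name == "...") step, then the field checks.
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(rpc_output, "raid_bdev1", "online", "raid0", 64, 2))
```

Running the same check with `expected_state="configuring"` would fail for this record, which is why the state-function test below re-verifies after every create/delete RPC.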
00:08:30.747 [2024-11-27 19:06:40.331906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.006 [2024-11-27 19:06:40.505023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.266 [2024-11-27 19:06:40.642214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.266 [2024-11-27 19:06:40.875014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.266 [2024-11-27 19:06:40.875069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.835 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.835 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:31.835 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.836 [2024-11-27 19:06:41.176337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.836 [2024-11-27 19:06:41.176408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.836 [2024-11-27 19:06:41.176419] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.836 [2024-11-27 19:06:41.176429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.836 19:06:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.836 "name": "Existed_Raid", 00:08:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.836 "strip_size_kb": 64, 00:08:31.836 "state": "configuring", 00:08:31.836 
"raid_level": "concat", 00:08:31.836 "superblock": false, 00:08:31.836 "num_base_bdevs": 2, 00:08:31.836 "num_base_bdevs_discovered": 0, 00:08:31.836 "num_base_bdevs_operational": 2, 00:08:31.836 "base_bdevs_list": [ 00:08:31.836 { 00:08:31.836 "name": "BaseBdev1", 00:08:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.836 "is_configured": false, 00:08:31.836 "data_offset": 0, 00:08:31.836 "data_size": 0 00:08:31.836 }, 00:08:31.836 { 00:08:31.836 "name": "BaseBdev2", 00:08:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.836 "is_configured": false, 00:08:31.836 "data_offset": 0, 00:08:31.836 "data_size": 0 00:08:31.836 } 00:08:31.836 ] 00:08:31.836 }' 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.836 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.096 [2024-11-27 19:06:41.619569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.096 [2024-11-27 19:06:41.619702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:32.096 [2024-11-27 19:06:41.631497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.096 [2024-11-27 19:06:41.631586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.096 [2024-11-27 19:06:41.631616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.096 [2024-11-27 19:06:41.631643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.096 [2024-11-27 19:06:41.685308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.096 BaseBdev1 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.096 [ 00:08:32.096 { 00:08:32.096 "name": "BaseBdev1", 00:08:32.096 "aliases": [ 00:08:32.096 "9658327e-5752-499b-abd1-acda19dc2bef" 00:08:32.096 ], 00:08:32.096 "product_name": "Malloc disk", 00:08:32.096 "block_size": 512, 00:08:32.096 "num_blocks": 65536, 00:08:32.096 "uuid": "9658327e-5752-499b-abd1-acda19dc2bef", 00:08:32.096 "assigned_rate_limits": { 00:08:32.096 "rw_ios_per_sec": 0, 00:08:32.096 "rw_mbytes_per_sec": 0, 00:08:32.096 "r_mbytes_per_sec": 0, 00:08:32.096 "w_mbytes_per_sec": 0 00:08:32.096 }, 00:08:32.096 "claimed": true, 00:08:32.096 "claim_type": "exclusive_write", 00:08:32.096 "zoned": false, 00:08:32.096 "supported_io_types": { 00:08:32.096 "read": true, 00:08:32.096 "write": true, 00:08:32.096 "unmap": true, 00:08:32.096 "flush": true, 00:08:32.096 "reset": true, 00:08:32.096 "nvme_admin": false, 00:08:32.096 "nvme_io": false, 00:08:32.096 "nvme_io_md": false, 00:08:32.096 "write_zeroes": true, 00:08:32.096 "zcopy": true, 00:08:32.096 "get_zone_info": false, 00:08:32.096 "zone_management": false, 00:08:32.096 "zone_append": false, 00:08:32.096 "compare": false, 00:08:32.096 "compare_and_write": false, 00:08:32.096 "abort": true, 00:08:32.096 "seek_hole": false, 00:08:32.096 "seek_data": false, 00:08:32.096 "copy": true, 00:08:32.096 "nvme_iov_md": 
false 00:08:32.096 }, 00:08:32.096 "memory_domains": [ 00:08:32.096 { 00:08:32.096 "dma_device_id": "system", 00:08:32.096 "dma_device_type": 1 00:08:32.096 }, 00:08:32.096 { 00:08:32.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.096 "dma_device_type": 2 00:08:32.096 } 00:08:32.096 ], 00:08:32.096 "driver_specific": {} 00:08:32.096 } 00:08:32.096 ] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.096 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.097 
19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.097 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.356 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.356 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.356 "name": "Existed_Raid", 00:08:32.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.356 "strip_size_kb": 64, 00:08:32.356 "state": "configuring", 00:08:32.356 "raid_level": "concat", 00:08:32.356 "superblock": false, 00:08:32.356 "num_base_bdevs": 2, 00:08:32.356 "num_base_bdevs_discovered": 1, 00:08:32.356 "num_base_bdevs_operational": 2, 00:08:32.356 "base_bdevs_list": [ 00:08:32.356 { 00:08:32.356 "name": "BaseBdev1", 00:08:32.356 "uuid": "9658327e-5752-499b-abd1-acda19dc2bef", 00:08:32.356 "is_configured": true, 00:08:32.356 "data_offset": 0, 00:08:32.356 "data_size": 65536 00:08:32.356 }, 00:08:32.356 { 00:08:32.356 "name": "BaseBdev2", 00:08:32.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.356 "is_configured": false, 00:08:32.356 "data_offset": 0, 00:08:32.356 "data_size": 0 00:08:32.356 } 00:08:32.356 ] 00:08:32.356 }' 00:08:32.356 19:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.356 19:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 [2024-11-27 19:06:42.100662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.617 [2024-11-27 19:06:42.100825] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 [2024-11-27 19:06:42.108693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.617 [2024-11-27 19:06:42.110906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.617 [2024-11-27 19:06:42.111006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.617 "name": "Existed_Raid", 00:08:32.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.617 "strip_size_kb": 64, 00:08:32.617 "state": "configuring", 00:08:32.617 "raid_level": "concat", 00:08:32.617 "superblock": false, 00:08:32.617 "num_base_bdevs": 2, 00:08:32.617 "num_base_bdevs_discovered": 1, 00:08:32.617 "num_base_bdevs_operational": 2, 00:08:32.617 "base_bdevs_list": [ 00:08:32.617 { 00:08:32.617 "name": "BaseBdev1", 00:08:32.617 "uuid": "9658327e-5752-499b-abd1-acda19dc2bef", 00:08:32.617 "is_configured": true, 00:08:32.617 "data_offset": 0, 00:08:32.617 "data_size": 65536 00:08:32.617 }, 00:08:32.617 { 00:08:32.617 "name": "BaseBdev2", 00:08:32.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.617 "is_configured": false, 00:08:32.617 "data_offset": 0, 00:08:32.617 "data_size": 0 00:08:32.617 } 
00:08:32.617 ] 00:08:32.617 }' 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.617 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.877 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:32.877 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.877 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.138 [2024-11-27 19:06:42.548585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.138 [2024-11-27 19:06:42.548649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.138 [2024-11-27 19:06:42.548657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:33.138 [2024-11-27 19:06:42.548986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:33.138 [2024-11-27 19:06:42.549187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.138 [2024-11-27 19:06:42.549208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:33.138 [2024-11-27 19:06:42.549522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.138 BaseBdev2 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.138 19:06:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.138 [ 00:08:33.138 { 00:08:33.138 "name": "BaseBdev2", 00:08:33.138 "aliases": [ 00:08:33.138 "37468442-015d-4eb4-a923-e6f281dd1ebe" 00:08:33.138 ], 00:08:33.138 "product_name": "Malloc disk", 00:08:33.138 "block_size": 512, 00:08:33.138 "num_blocks": 65536, 00:08:33.138 "uuid": "37468442-015d-4eb4-a923-e6f281dd1ebe", 00:08:33.138 "assigned_rate_limits": { 00:08:33.138 "rw_ios_per_sec": 0, 00:08:33.138 "rw_mbytes_per_sec": 0, 00:08:33.138 "r_mbytes_per_sec": 0, 00:08:33.138 "w_mbytes_per_sec": 0 00:08:33.138 }, 00:08:33.138 "claimed": true, 00:08:33.138 "claim_type": "exclusive_write", 00:08:33.138 "zoned": false, 00:08:33.138 "supported_io_types": { 00:08:33.138 "read": true, 00:08:33.138 "write": true, 00:08:33.138 "unmap": true, 00:08:33.138 "flush": true, 00:08:33.138 "reset": true, 00:08:33.138 "nvme_admin": false, 00:08:33.138 "nvme_io": false, 00:08:33.138 "nvme_io_md": 
false, 00:08:33.138 "write_zeroes": true, 00:08:33.138 "zcopy": true, 00:08:33.138 "get_zone_info": false, 00:08:33.138 "zone_management": false, 00:08:33.138 "zone_append": false, 00:08:33.138 "compare": false, 00:08:33.138 "compare_and_write": false, 00:08:33.138 "abort": true, 00:08:33.138 "seek_hole": false, 00:08:33.138 "seek_data": false, 00:08:33.138 "copy": true, 00:08:33.138 "nvme_iov_md": false 00:08:33.138 }, 00:08:33.138 "memory_domains": [ 00:08:33.138 { 00:08:33.138 "dma_device_id": "system", 00:08:33.138 "dma_device_type": 1 00:08:33.138 }, 00:08:33.138 { 00:08:33.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.138 "dma_device_type": 2 00:08:33.138 } 00:08:33.138 ], 00:08:33.138 "driver_specific": {} 00:08:33.138 } 00:08:33.138 ] 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.138 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.139 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.139 "name": "Existed_Raid", 00:08:33.139 "uuid": "a183a863-cbe0-445f-9d5c-6f050d51a3ba", 00:08:33.139 "strip_size_kb": 64, 00:08:33.139 "state": "online", 00:08:33.139 "raid_level": "concat", 00:08:33.139 "superblock": false, 00:08:33.139 "num_base_bdevs": 2, 00:08:33.139 "num_base_bdevs_discovered": 2, 00:08:33.139 "num_base_bdevs_operational": 2, 00:08:33.139 "base_bdevs_list": [ 00:08:33.139 { 00:08:33.139 "name": "BaseBdev1", 00:08:33.139 "uuid": "9658327e-5752-499b-abd1-acda19dc2bef", 00:08:33.139 "is_configured": true, 00:08:33.139 "data_offset": 0, 00:08:33.139 "data_size": 65536 00:08:33.139 }, 00:08:33.139 { 00:08:33.139 "name": "BaseBdev2", 00:08:33.139 "uuid": "37468442-015d-4eb4-a923-e6f281dd1ebe", 00:08:33.139 "is_configured": true, 00:08:33.139 "data_offset": 0, 00:08:33.139 "data_size": 65536 00:08:33.139 } 00:08:33.139 ] 00:08:33.139 }' 00:08:33.139 19:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:33.139 19:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.399 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.658 [2024-11-27 19:06:43.036105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.658 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.658 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.658 "name": "Existed_Raid", 00:08:33.658 "aliases": [ 00:08:33.658 "a183a863-cbe0-445f-9d5c-6f050d51a3ba" 00:08:33.658 ], 00:08:33.658 "product_name": "Raid Volume", 00:08:33.658 "block_size": 512, 00:08:33.658 "num_blocks": 131072, 00:08:33.658 "uuid": "a183a863-cbe0-445f-9d5c-6f050d51a3ba", 00:08:33.658 "assigned_rate_limits": { 00:08:33.658 "rw_ios_per_sec": 0, 00:08:33.658 "rw_mbytes_per_sec": 0, 00:08:33.658 "r_mbytes_per_sec": 
0, 00:08:33.658 "w_mbytes_per_sec": 0 00:08:33.658 }, 00:08:33.658 "claimed": false, 00:08:33.658 "zoned": false, 00:08:33.658 "supported_io_types": { 00:08:33.658 "read": true, 00:08:33.658 "write": true, 00:08:33.658 "unmap": true, 00:08:33.658 "flush": true, 00:08:33.658 "reset": true, 00:08:33.658 "nvme_admin": false, 00:08:33.658 "nvme_io": false, 00:08:33.659 "nvme_io_md": false, 00:08:33.659 "write_zeroes": true, 00:08:33.659 "zcopy": false, 00:08:33.659 "get_zone_info": false, 00:08:33.659 "zone_management": false, 00:08:33.659 "zone_append": false, 00:08:33.659 "compare": false, 00:08:33.659 "compare_and_write": false, 00:08:33.659 "abort": false, 00:08:33.659 "seek_hole": false, 00:08:33.659 "seek_data": false, 00:08:33.659 "copy": false, 00:08:33.659 "nvme_iov_md": false 00:08:33.659 }, 00:08:33.659 "memory_domains": [ 00:08:33.659 { 00:08:33.659 "dma_device_id": "system", 00:08:33.659 "dma_device_type": 1 00:08:33.659 }, 00:08:33.659 { 00:08:33.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.659 "dma_device_type": 2 00:08:33.659 }, 00:08:33.659 { 00:08:33.659 "dma_device_id": "system", 00:08:33.659 "dma_device_type": 1 00:08:33.659 }, 00:08:33.659 { 00:08:33.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.659 "dma_device_type": 2 00:08:33.659 } 00:08:33.659 ], 00:08:33.659 "driver_specific": { 00:08:33.659 "raid": { 00:08:33.659 "uuid": "a183a863-cbe0-445f-9d5c-6f050d51a3ba", 00:08:33.659 "strip_size_kb": 64, 00:08:33.659 "state": "online", 00:08:33.659 "raid_level": "concat", 00:08:33.659 "superblock": false, 00:08:33.659 "num_base_bdevs": 2, 00:08:33.659 "num_base_bdevs_discovered": 2, 00:08:33.659 "num_base_bdevs_operational": 2, 00:08:33.659 "base_bdevs_list": [ 00:08:33.659 { 00:08:33.659 "name": "BaseBdev1", 00:08:33.659 "uuid": "9658327e-5752-499b-abd1-acda19dc2bef", 00:08:33.659 "is_configured": true, 00:08:33.659 "data_offset": 0, 00:08:33.659 "data_size": 65536 00:08:33.659 }, 00:08:33.659 { 00:08:33.659 "name": "BaseBdev2", 
00:08:33.659 "uuid": "37468442-015d-4eb4-a923-e6f281dd1ebe", 00:08:33.659 "is_configured": true, 00:08:33.659 "data_offset": 0, 00:08:33.659 "data_size": 65536 00:08:33.659 } 00:08:33.659 ] 00:08:33.659 } 00:08:33.659 } 00:08:33.659 }' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:33.659 BaseBdev2' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.659 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 [2024-11-27 19:06:43.263503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.659 [2024-11-27 19:06:43.263608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.659 [2024-11-27 19:06:43.263714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.918 "name": "Existed_Raid", 00:08:33.918 "uuid": "a183a863-cbe0-445f-9d5c-6f050d51a3ba", 00:08:33.918 "strip_size_kb": 64, 00:08:33.918 
"state": "offline", 00:08:33.918 "raid_level": "concat", 00:08:33.918 "superblock": false, 00:08:33.918 "num_base_bdevs": 2, 00:08:33.918 "num_base_bdevs_discovered": 1, 00:08:33.918 "num_base_bdevs_operational": 1, 00:08:33.918 "base_bdevs_list": [ 00:08:33.918 { 00:08:33.918 "name": null, 00:08:33.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.918 "is_configured": false, 00:08:33.918 "data_offset": 0, 00:08:33.918 "data_size": 65536 00:08:33.918 }, 00:08:33.918 { 00:08:33.918 "name": "BaseBdev2", 00:08:33.918 "uuid": "37468442-015d-4eb4-a923-e6f281dd1ebe", 00:08:33.918 "is_configured": true, 00:08:33.918 "data_offset": 0, 00:08:33.918 "data_size": 65536 00:08:33.918 } 00:08:33.918 ] 00:08:33.918 }' 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.918 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.177 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.177 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.177 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.177 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.177 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.177 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 [2024-11-27 19:06:43.847445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.438 [2024-11-27 19:06:43.847511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 19:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61790 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61790 ']' 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61790 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61790 00:08:34.438 killing process with pid 61790 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61790' 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61790 00:08:34.438 19:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61790 00:08:34.438 [2024-11-27 19:06:44.042997] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.438 [2024-11-27 19:06:44.060253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:35.885 ************************************ 00:08:35.885 END TEST raid_state_function_test 00:08:35.885 ************************************ 00:08:35.885 00:08:35.885 real 0m5.046s 00:08:35.885 user 0m7.106s 00:08:35.885 sys 0m0.852s 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.885 19:06:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:35.885 19:06:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:35.885 19:06:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.885 19:06:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.885 ************************************ 00:08:35.885 START TEST raid_state_function_test_sb 00:08:35.885 ************************************ 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:35.885 Process raid pid: 62043 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62043 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62043' 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62043 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62043 ']' 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.885 19:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.885 [2024-11-27 19:06:45.425257] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:35.885 [2024-11-27 19:06:45.425414] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.146 [2024-11-27 19:06:45.597543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.146 [2024-11-27 19:06:45.738451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.405 [2024-11-27 19:06:45.975072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.405 [2024-11-27 19:06:45.975226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.666 [2024-11-27 19:06:46.289404] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:36.666 [2024-11-27 19:06:46.289546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.666 [2024-11-27 19:06:46.289584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.666 [2024-11-27 19:06:46.289610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.666 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.666 
19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.926 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.926 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.926 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.926 "name": "Existed_Raid", 00:08:36.926 "uuid": "3d1d8bc9-bddd-49be-b70e-1caaa925c9e7", 00:08:36.926 "strip_size_kb": 64, 00:08:36.926 "state": "configuring", 00:08:36.926 "raid_level": "concat", 00:08:36.926 "superblock": true, 00:08:36.926 "num_base_bdevs": 2, 00:08:36.926 "num_base_bdevs_discovered": 0, 00:08:36.926 "num_base_bdevs_operational": 2, 00:08:36.926 "base_bdevs_list": [ 00:08:36.926 { 00:08:36.926 "name": "BaseBdev1", 00:08:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.926 "is_configured": false, 00:08:36.926 "data_offset": 0, 00:08:36.926 "data_size": 0 00:08:36.926 }, 00:08:36.926 { 00:08:36.926 "name": "BaseBdev2", 00:08:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.926 "is_configured": false, 00:08:36.926 "data_offset": 0, 00:08:36.926 "data_size": 0 00:08:36.926 } 00:08:36.926 ] 00:08:36.926 }' 00:08:36.926 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.926 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 [2024-11-27 19:06:46.744675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:37.186 [2024-11-27 19:06:46.744792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 [2024-11-27 19:06:46.752635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.186 [2024-11-27 19:06:46.752748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.186 [2024-11-27 19:06:46.752780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.186 [2024-11-27 19:06:46.752808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 [2024-11-27 19:06:46.801571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.186 BaseBdev1 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.186 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.445 [ 00:08:37.445 { 00:08:37.445 "name": "BaseBdev1", 00:08:37.445 "aliases": [ 00:08:37.445 "b6f49bcd-70d0-4504-8f43-c87b48f2f4e4" 00:08:37.445 ], 00:08:37.445 "product_name": "Malloc disk", 00:08:37.445 "block_size": 512, 00:08:37.445 "num_blocks": 65536, 00:08:37.445 "uuid": "b6f49bcd-70d0-4504-8f43-c87b48f2f4e4", 00:08:37.445 "assigned_rate_limits": { 00:08:37.445 "rw_ios_per_sec": 0, 00:08:37.445 "rw_mbytes_per_sec": 0, 00:08:37.445 "r_mbytes_per_sec": 0, 00:08:37.445 "w_mbytes_per_sec": 0 00:08:37.445 }, 00:08:37.445 "claimed": true, 
00:08:37.445 "claim_type": "exclusive_write", 00:08:37.445 "zoned": false, 00:08:37.445 "supported_io_types": { 00:08:37.445 "read": true, 00:08:37.445 "write": true, 00:08:37.445 "unmap": true, 00:08:37.445 "flush": true, 00:08:37.445 "reset": true, 00:08:37.445 "nvme_admin": false, 00:08:37.445 "nvme_io": false, 00:08:37.445 "nvme_io_md": false, 00:08:37.445 "write_zeroes": true, 00:08:37.445 "zcopy": true, 00:08:37.445 "get_zone_info": false, 00:08:37.445 "zone_management": false, 00:08:37.445 "zone_append": false, 00:08:37.445 "compare": false, 00:08:37.445 "compare_and_write": false, 00:08:37.445 "abort": true, 00:08:37.445 "seek_hole": false, 00:08:37.445 "seek_data": false, 00:08:37.445 "copy": true, 00:08:37.445 "nvme_iov_md": false 00:08:37.445 }, 00:08:37.445 "memory_domains": [ 00:08:37.445 { 00:08:37.445 "dma_device_id": "system", 00:08:37.445 "dma_device_type": 1 00:08:37.445 }, 00:08:37.445 { 00:08:37.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.445 "dma_device_type": 2 00:08:37.445 } 00:08:37.445 ], 00:08:37.445 "driver_specific": {} 00:08:37.445 } 00:08:37.445 ] 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.445 19:06:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.445 "name": "Existed_Raid", 00:08:37.445 "uuid": "19eb2e29-0a44-46fd-a362-37b52fe43f58", 00:08:37.445 "strip_size_kb": 64, 00:08:37.445 "state": "configuring", 00:08:37.445 "raid_level": "concat", 00:08:37.445 "superblock": true, 00:08:37.445 "num_base_bdevs": 2, 00:08:37.445 "num_base_bdevs_discovered": 1, 00:08:37.445 "num_base_bdevs_operational": 2, 00:08:37.445 "base_bdevs_list": [ 00:08:37.445 { 00:08:37.445 "name": "BaseBdev1", 00:08:37.445 "uuid": "b6f49bcd-70d0-4504-8f43-c87b48f2f4e4", 00:08:37.445 "is_configured": true, 00:08:37.445 "data_offset": 2048, 00:08:37.445 "data_size": 63488 00:08:37.445 }, 00:08:37.445 { 00:08:37.445 "name": "BaseBdev2", 00:08:37.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.445 
"is_configured": false, 00:08:37.445 "data_offset": 0, 00:08:37.445 "data_size": 0 00:08:37.445 } 00:08:37.445 ] 00:08:37.445 }' 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.445 19:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.704 [2024-11-27 19:06:47.308782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.704 [2024-11-27 19:06:47.308953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.704 [2024-11-27 19:06:47.320769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.704 [2024-11-27 19:06:47.322901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.704 [2024-11-27 19:06:47.322942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.704 19:06:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.704 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.963 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.963 19:06:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.963 "name": "Existed_Raid", 00:08:37.963 "uuid": "e7d24197-2557-4a49-9153-51950753c060", 00:08:37.963 "strip_size_kb": 64, 00:08:37.963 "state": "configuring", 00:08:37.963 "raid_level": "concat", 00:08:37.963 "superblock": true, 00:08:37.963 "num_base_bdevs": 2, 00:08:37.963 "num_base_bdevs_discovered": 1, 00:08:37.963 "num_base_bdevs_operational": 2, 00:08:37.963 "base_bdevs_list": [ 00:08:37.963 { 00:08:37.963 "name": "BaseBdev1", 00:08:37.963 "uuid": "b6f49bcd-70d0-4504-8f43-c87b48f2f4e4", 00:08:37.964 "is_configured": true, 00:08:37.964 "data_offset": 2048, 00:08:37.964 "data_size": 63488 00:08:37.964 }, 00:08:37.964 { 00:08:37.964 "name": "BaseBdev2", 00:08:37.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.964 "is_configured": false, 00:08:37.964 "data_offset": 0, 00:08:37.964 "data_size": 0 00:08:37.964 } 00:08:37.964 ] 00:08:37.964 }' 00:08:37.964 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.964 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 [2024-11-27 19:06:47.824905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.224 [2024-11-27 19:06:47.825356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.224 [2024-11-27 19:06:47.825415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:38.224 [2024-11-27 19:06:47.825748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:38.224 BaseBdev2 00:08:38.224 [2024-11-27 19:06:47.825966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.224 [2024-11-27 19:06:47.825984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:38.224 [2024-11-27 19:06:47.826137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.224 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.224 19:06:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.224 [ 00:08:38.224 { 00:08:38.224 "name": "BaseBdev2", 00:08:38.224 "aliases": [ 00:08:38.224 "56776845-359f-4589-b04c-4ad8a32a05a5" 00:08:38.224 ], 00:08:38.224 "product_name": "Malloc disk", 00:08:38.224 "block_size": 512, 00:08:38.224 "num_blocks": 65536, 00:08:38.224 "uuid": "56776845-359f-4589-b04c-4ad8a32a05a5", 00:08:38.224 "assigned_rate_limits": { 00:08:38.224 "rw_ios_per_sec": 0, 00:08:38.224 "rw_mbytes_per_sec": 0, 00:08:38.224 "r_mbytes_per_sec": 0, 00:08:38.224 "w_mbytes_per_sec": 0 00:08:38.224 }, 00:08:38.224 "claimed": true, 00:08:38.224 "claim_type": "exclusive_write", 00:08:38.224 "zoned": false, 00:08:38.224 "supported_io_types": { 00:08:38.224 "read": true, 00:08:38.224 "write": true, 00:08:38.224 "unmap": true, 00:08:38.224 "flush": true, 00:08:38.224 "reset": true, 00:08:38.224 "nvme_admin": false, 00:08:38.224 "nvme_io": false, 00:08:38.224 "nvme_io_md": false, 00:08:38.224 "write_zeroes": true, 00:08:38.224 "zcopy": true, 00:08:38.224 "get_zone_info": false, 00:08:38.224 "zone_management": false, 00:08:38.224 "zone_append": false, 00:08:38.224 "compare": false, 00:08:38.224 "compare_and_write": false, 00:08:38.224 "abort": true, 00:08:38.224 "seek_hole": false, 00:08:38.224 "seek_data": false, 00:08:38.224 "copy": true, 00:08:38.483 "nvme_iov_md": false 00:08:38.483 }, 00:08:38.483 "memory_domains": [ 00:08:38.483 { 00:08:38.483 "dma_device_id": "system", 00:08:38.483 "dma_device_type": 1 00:08:38.483 }, 00:08:38.483 { 00:08:38.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.483 "dma_device_type": 2 00:08:38.483 } 00:08:38.483 ], 00:08:38.483 "driver_specific": {} 00:08:38.483 } 00:08:38.483 ] 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.483 19:06:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.483 19:06:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.483 "name": "Existed_Raid", 00:08:38.483 "uuid": "e7d24197-2557-4a49-9153-51950753c060", 00:08:38.483 "strip_size_kb": 64, 00:08:38.483 "state": "online", 00:08:38.483 "raid_level": "concat", 00:08:38.483 "superblock": true, 00:08:38.483 "num_base_bdevs": 2, 00:08:38.483 "num_base_bdevs_discovered": 2, 00:08:38.483 "num_base_bdevs_operational": 2, 00:08:38.483 "base_bdevs_list": [ 00:08:38.483 { 00:08:38.483 "name": "BaseBdev1", 00:08:38.483 "uuid": "b6f49bcd-70d0-4504-8f43-c87b48f2f4e4", 00:08:38.483 "is_configured": true, 00:08:38.483 "data_offset": 2048, 00:08:38.483 "data_size": 63488 00:08:38.483 }, 00:08:38.483 { 00:08:38.483 "name": "BaseBdev2", 00:08:38.483 "uuid": "56776845-359f-4589-b04c-4ad8a32a05a5", 00:08:38.483 "is_configured": true, 00:08:38.483 "data_offset": 2048, 00:08:38.483 "data_size": 63488 00:08:38.483 } 00:08:38.483 ] 00:08:38.483 }' 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.483 19:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.742 [2024-11-27 19:06:48.328374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.742 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.742 "name": "Existed_Raid", 00:08:38.742 "aliases": [ 00:08:38.742 "e7d24197-2557-4a49-9153-51950753c060" 00:08:38.742 ], 00:08:38.742 "product_name": "Raid Volume", 00:08:38.742 "block_size": 512, 00:08:38.742 "num_blocks": 126976, 00:08:38.742 "uuid": "e7d24197-2557-4a49-9153-51950753c060", 00:08:38.742 "assigned_rate_limits": { 00:08:38.742 "rw_ios_per_sec": 0, 00:08:38.742 "rw_mbytes_per_sec": 0, 00:08:38.742 "r_mbytes_per_sec": 0, 00:08:38.742 "w_mbytes_per_sec": 0 00:08:38.742 }, 00:08:38.742 "claimed": false, 00:08:38.742 "zoned": false, 00:08:38.742 "supported_io_types": { 00:08:38.742 "read": true, 00:08:38.742 "write": true, 00:08:38.742 "unmap": true, 00:08:38.742 "flush": true, 00:08:38.742 "reset": true, 00:08:38.742 "nvme_admin": false, 00:08:38.742 "nvme_io": false, 00:08:38.742 "nvme_io_md": false, 00:08:38.742 "write_zeroes": true, 00:08:38.742 "zcopy": false, 00:08:38.742 "get_zone_info": false, 00:08:38.742 "zone_management": false, 00:08:38.742 "zone_append": false, 00:08:38.742 "compare": false, 00:08:38.742 "compare_and_write": false, 00:08:38.742 "abort": false, 00:08:38.742 "seek_hole": false, 00:08:38.742 "seek_data": false, 00:08:38.742 "copy": false, 00:08:38.742 "nvme_iov_md": false 00:08:38.742 }, 00:08:38.742 "memory_domains": [ 00:08:38.742 { 00:08:38.742 
"dma_device_id": "system", 00:08:38.742 "dma_device_type": 1 00:08:38.742 }, 00:08:38.742 { 00:08:38.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.742 "dma_device_type": 2 00:08:38.742 }, 00:08:38.742 { 00:08:38.742 "dma_device_id": "system", 00:08:38.742 "dma_device_type": 1 00:08:38.742 }, 00:08:38.742 { 00:08:38.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.742 "dma_device_type": 2 00:08:38.742 } 00:08:38.742 ], 00:08:38.742 "driver_specific": { 00:08:38.742 "raid": { 00:08:38.742 "uuid": "e7d24197-2557-4a49-9153-51950753c060", 00:08:38.742 "strip_size_kb": 64, 00:08:38.742 "state": "online", 00:08:38.742 "raid_level": "concat", 00:08:38.742 "superblock": true, 00:08:38.742 "num_base_bdevs": 2, 00:08:38.742 "num_base_bdevs_discovered": 2, 00:08:38.742 "num_base_bdevs_operational": 2, 00:08:38.742 "base_bdevs_list": [ 00:08:38.742 { 00:08:38.742 "name": "BaseBdev1", 00:08:38.742 "uuid": "b6f49bcd-70d0-4504-8f43-c87b48f2f4e4", 00:08:38.742 "is_configured": true, 00:08:38.742 "data_offset": 2048, 00:08:38.742 "data_size": 63488 00:08:38.743 }, 00:08:38.743 { 00:08:38.743 "name": "BaseBdev2", 00:08:38.743 "uuid": "56776845-359f-4589-b04c-4ad8a32a05a5", 00:08:38.743 "is_configured": true, 00:08:38.743 "data_offset": 2048, 00:08:38.743 "data_size": 63488 00:08:38.743 } 00:08:38.743 ] 00:08:38.743 } 00:08:38.743 } 00:08:38.743 }' 00:08:38.743 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.002 BaseBdev2' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.002 19:06:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.002 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.002 [2024-11-27 19:06:48.539772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.002 [2024-11-27 19:06:48.539862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.002 [2024-11-27 19:06:48.539929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.262 "name": "Existed_Raid", 00:08:39.262 "uuid": "e7d24197-2557-4a49-9153-51950753c060", 00:08:39.262 "strip_size_kb": 64, 00:08:39.262 "state": "offline", 00:08:39.262 "raid_level": "concat", 00:08:39.262 "superblock": true, 00:08:39.262 "num_base_bdevs": 2, 00:08:39.262 "num_base_bdevs_discovered": 1, 00:08:39.262 "num_base_bdevs_operational": 1, 00:08:39.262 "base_bdevs_list": [ 00:08:39.262 { 00:08:39.262 "name": null, 00:08:39.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.262 "is_configured": false, 00:08:39.262 "data_offset": 0, 00:08:39.262 "data_size": 63488 00:08:39.262 }, 00:08:39.262 { 00:08:39.262 "name": "BaseBdev2", 00:08:39.262 "uuid": "56776845-359f-4589-b04c-4ad8a32a05a5", 00:08:39.262 "is_configured": true, 00:08:39.262 "data_offset": 2048, 00:08:39.262 "data_size": 63488 00:08:39.262 } 00:08:39.262 ] 
00:08:39.262 }' 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.262 19:06:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.522 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.522 [2024-11-27 19:06:49.101626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.522 [2024-11-27 19:06:49.101783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.781 19:06:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62043 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62043 ']' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62043 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62043 00:08:39.781 killing process with pid 62043 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62043' 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62043 00:08:39.781 [2024-11-27 19:06:49.290265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.781 19:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62043 00:08:39.781 [2024-11-27 19:06:49.308188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.161 19:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.161 00:08:41.161 real 0m5.193s 00:08:41.161 user 0m7.367s 00:08:41.161 sys 0m0.900s 00:08:41.161 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.161 19:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.161 ************************************ 00:08:41.161 END TEST raid_state_function_test_sb 00:08:41.161 ************************************ 00:08:41.161 19:06:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:41.161 19:06:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.161 19:06:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.161 19:06:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.161 ************************************ 00:08:41.161 START TEST raid_superblock_test 00:08:41.161 ************************************ 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:41.161 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62301 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62301 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62301 ']' 00:08:41.162 
19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.162 19:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.162 [2024-11-27 19:06:50.696931] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:41.162 [2024-11-27 19:06:50.697149] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62301 ] 00:08:41.422 [2024-11-27 19:06:50.857205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.422 [2024-11-27 19:06:50.994946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.682 [2024-11-27 19:06:51.234898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.682 [2024-11-27 19:06:51.235089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.942 malloc1 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.942 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.202 [2024-11-27 19:06:51.582232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.202 [2024-11-27 19:06:51.582298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.202 [2024-11-27 19:06:51.582321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:42.202 [2024-11-27 19:06:51.582331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:42.202 [2024-11-27 19:06:51.584913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.202 [2024-11-27 19:06:51.584951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.202 pt1 00:08:42.202 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.202 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.202 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.202 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:42.202 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.203 malloc2 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.203 [2024-11-27 19:06:51.647482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.203 [2024-11-27 19:06:51.647591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.203 [2024-11-27 19:06:51.647652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:42.203 [2024-11-27 19:06:51.647686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.203 [2024-11-27 19:06:51.650140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.203 [2024-11-27 19:06:51.650219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.203 pt2 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.203 [2024-11-27 19:06:51.659519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.203 [2024-11-27 19:06:51.661629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.203 [2024-11-27 19:06:51.661889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:42.203 [2024-11-27 19:06:51.661938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:42.203 [2024-11-27 19:06:51.662230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:42.203 [2024-11-27 19:06:51.662448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:42.203 [2024-11-27 19:06:51.662492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:42.203 [2024-11-27 19:06:51.662723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.203 19:06:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.203 "name": "raid_bdev1", 00:08:42.203 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:42.203 "strip_size_kb": 64, 00:08:42.203 "state": "online", 00:08:42.203 "raid_level": "concat", 00:08:42.203 "superblock": true, 00:08:42.203 "num_base_bdevs": 2, 00:08:42.203 "num_base_bdevs_discovered": 2, 00:08:42.203 "num_base_bdevs_operational": 2, 00:08:42.203 "base_bdevs_list": [ 00:08:42.203 { 00:08:42.203 "name": "pt1", 00:08:42.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.203 "is_configured": true, 00:08:42.203 "data_offset": 2048, 00:08:42.203 "data_size": 63488 00:08:42.203 }, 00:08:42.203 { 00:08:42.203 "name": "pt2", 00:08:42.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.203 "is_configured": true, 00:08:42.203 "data_offset": 2048, 00:08:42.203 "data_size": 63488 00:08:42.203 } 00:08:42.203 ] 00:08:42.203 }' 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.203 19:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.773 
19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.773 [2024-11-27 19:06:52.107183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.773 "name": "raid_bdev1", 00:08:42.773 "aliases": [ 00:08:42.773 "ffe4a204-6645-48a9-8916-384afe624fe9" 00:08:42.773 ], 00:08:42.773 "product_name": "Raid Volume", 00:08:42.773 "block_size": 512, 00:08:42.773 "num_blocks": 126976, 00:08:42.773 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:42.773 "assigned_rate_limits": { 00:08:42.773 "rw_ios_per_sec": 0, 00:08:42.773 "rw_mbytes_per_sec": 0, 00:08:42.773 "r_mbytes_per_sec": 0, 00:08:42.773 "w_mbytes_per_sec": 0 00:08:42.773 }, 00:08:42.773 "claimed": false, 00:08:42.773 "zoned": false, 00:08:42.773 "supported_io_types": { 00:08:42.773 "read": true, 00:08:42.773 "write": true, 00:08:42.773 "unmap": true, 00:08:42.773 "flush": true, 00:08:42.773 "reset": true, 00:08:42.773 "nvme_admin": false, 00:08:42.773 "nvme_io": false, 00:08:42.773 "nvme_io_md": false, 00:08:42.773 "write_zeroes": true, 00:08:42.773 "zcopy": false, 00:08:42.773 "get_zone_info": false, 00:08:42.773 "zone_management": false, 00:08:42.773 "zone_append": false, 00:08:42.773 "compare": false, 00:08:42.773 "compare_and_write": false, 00:08:42.773 "abort": false, 00:08:42.773 "seek_hole": false, 00:08:42.773 
"seek_data": false, 00:08:42.773 "copy": false, 00:08:42.773 "nvme_iov_md": false 00:08:42.773 }, 00:08:42.773 "memory_domains": [ 00:08:42.773 { 00:08:42.773 "dma_device_id": "system", 00:08:42.773 "dma_device_type": 1 00:08:42.773 }, 00:08:42.773 { 00:08:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.773 "dma_device_type": 2 00:08:42.773 }, 00:08:42.773 { 00:08:42.773 "dma_device_id": "system", 00:08:42.773 "dma_device_type": 1 00:08:42.773 }, 00:08:42.773 { 00:08:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.773 "dma_device_type": 2 00:08:42.773 } 00:08:42.773 ], 00:08:42.773 "driver_specific": { 00:08:42.773 "raid": { 00:08:42.773 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:42.773 "strip_size_kb": 64, 00:08:42.773 "state": "online", 00:08:42.773 "raid_level": "concat", 00:08:42.773 "superblock": true, 00:08:42.773 "num_base_bdevs": 2, 00:08:42.773 "num_base_bdevs_discovered": 2, 00:08:42.773 "num_base_bdevs_operational": 2, 00:08:42.773 "base_bdevs_list": [ 00:08:42.773 { 00:08:42.773 "name": "pt1", 00:08:42.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.773 "is_configured": true, 00:08:42.773 "data_offset": 2048, 00:08:42.773 "data_size": 63488 00:08:42.773 }, 00:08:42.773 { 00:08:42.773 "name": "pt2", 00:08:42.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.773 "is_configured": true, 00:08:42.773 "data_offset": 2048, 00:08:42.773 "data_size": 63488 00:08:42.773 } 00:08:42.773 ] 00:08:42.773 } 00:08:42.773 } 00:08:42.773 }' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.773 pt2' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.773 19:06:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.773 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 [2024-11-27 19:06:52.322662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ffe4a204-6645-48a9-8916-384afe624fe9 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ffe4a204-6645-48a9-8916-384afe624fe9 ']' 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 [2024-11-27 19:06:52.366367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.774 [2024-11-27 19:06:52.366487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.774 [2024-11-27 19:06:52.366649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.774 [2024-11-27 19:06:52.366754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.774 [2024-11-27 19:06:52.366806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:43.034 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.035 [2024-11-27 19:06:52.502109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:43.035 [2024-11-27 19:06:52.504307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:43.035 [2024-11-27 19:06:52.504428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:43.035 [2024-11-27 19:06:52.504524] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:43.035 [2024-11-27 19:06:52.504577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.035 [2024-11-27 19:06:52.504611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:43.035 request: 00:08:43.035 { 00:08:43.035 "name": "raid_bdev1", 00:08:43.035 "raid_level": "concat", 00:08:43.035 "base_bdevs": [ 00:08:43.035 "malloc1", 00:08:43.035 "malloc2" 00:08:43.035 ], 00:08:43.035 "strip_size_kb": 64, 00:08:43.035 "superblock": false, 00:08:43.035 "method": "bdev_raid_create", 00:08:43.035 "req_id": 1 00:08:43.035 } 00:08:43.035 Got JSON-RPC error response 00:08:43.035 response: 00:08:43.035 { 00:08:43.035 "code": -17, 00:08:43.035 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.035 } 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.035 
19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.035 [2024-11-27 19:06:52.573962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.035 [2024-11-27 19:06:52.574058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.035 [2024-11-27 19:06:52.574100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:43.035 [2024-11-27 19:06:52.574133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.035 [2024-11-27 19:06:52.576596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.035 [2024-11-27 19:06:52.576670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.035 [2024-11-27 19:06:52.576809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.035 [2024-11-27 19:06:52.576894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.035 pt1 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.035 "name": "raid_bdev1", 00:08:43.035 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:43.035 "strip_size_kb": 64, 00:08:43.035 "state": "configuring", 00:08:43.035 "raid_level": "concat", 00:08:43.035 "superblock": true, 00:08:43.035 "num_base_bdevs": 2, 00:08:43.035 "num_base_bdevs_discovered": 1, 00:08:43.035 "num_base_bdevs_operational": 2, 00:08:43.035 "base_bdevs_list": [ 00:08:43.035 { 00:08:43.035 "name": "pt1", 00:08:43.035 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:43.035 "is_configured": true, 00:08:43.035 "data_offset": 2048, 00:08:43.035 "data_size": 63488 00:08:43.035 }, 00:08:43.035 { 00:08:43.035 "name": null, 00:08:43.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.035 "is_configured": false, 00:08:43.035 "data_offset": 2048, 00:08:43.035 "data_size": 63488 00:08:43.035 } 00:08:43.035 ] 00:08:43.035 }' 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.035 19:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.606 [2024-11-27 19:06:53.013265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.606 [2024-11-27 19:06:53.013429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.606 [2024-11-27 19:06:53.013474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:43.606 [2024-11-27 19:06:53.013509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.606 [2024-11-27 19:06:53.014098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.606 [2024-11-27 19:06:53.014178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:43.606 [2024-11-27 19:06:53.014313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.606 [2024-11-27 19:06:53.014374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.606 [2024-11-27 19:06:53.014546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.606 [2024-11-27 19:06:53.014586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:43.606 [2024-11-27 19:06:53.014902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:43.606 [2024-11-27 19:06:53.015100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.606 [2024-11-27 19:06:53.015139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.606 [2024-11-27 19:06:53.015330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.606 pt2 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.606 "name": "raid_bdev1", 00:08:43.606 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:43.606 "strip_size_kb": 64, 00:08:43.606 "state": "online", 00:08:43.606 "raid_level": "concat", 00:08:43.606 "superblock": true, 00:08:43.606 "num_base_bdevs": 2, 00:08:43.606 "num_base_bdevs_discovered": 2, 00:08:43.606 "num_base_bdevs_operational": 2, 00:08:43.606 "base_bdevs_list": [ 00:08:43.606 { 00:08:43.606 "name": "pt1", 00:08:43.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.606 "is_configured": true, 00:08:43.606 "data_offset": 2048, 00:08:43.606 "data_size": 63488 00:08:43.606 }, 00:08:43.606 { 00:08:43.606 "name": "pt2", 00:08:43.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.606 "is_configured": true, 00:08:43.606 "data_offset": 2048, 00:08:43.606 "data_size": 63488 00:08:43.606 } 00:08:43.606 ] 00:08:43.606 }' 00:08:43.606 19:06:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.606 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.866 [2024-11-27 19:06:53.456776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.866 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.866 "name": "raid_bdev1", 00:08:43.866 "aliases": [ 00:08:43.866 "ffe4a204-6645-48a9-8916-384afe624fe9" 00:08:43.866 ], 00:08:43.866 "product_name": "Raid Volume", 00:08:43.866 "block_size": 512, 00:08:43.866 "num_blocks": 126976, 00:08:43.866 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:43.866 "assigned_rate_limits": { 00:08:43.866 "rw_ios_per_sec": 0, 00:08:43.866 "rw_mbytes_per_sec": 0, 00:08:43.866 
"r_mbytes_per_sec": 0, 00:08:43.866 "w_mbytes_per_sec": 0 00:08:43.866 }, 00:08:43.866 "claimed": false, 00:08:43.867 "zoned": false, 00:08:43.867 "supported_io_types": { 00:08:43.867 "read": true, 00:08:43.867 "write": true, 00:08:43.867 "unmap": true, 00:08:43.867 "flush": true, 00:08:43.867 "reset": true, 00:08:43.867 "nvme_admin": false, 00:08:43.867 "nvme_io": false, 00:08:43.867 "nvme_io_md": false, 00:08:43.867 "write_zeroes": true, 00:08:43.867 "zcopy": false, 00:08:43.867 "get_zone_info": false, 00:08:43.867 "zone_management": false, 00:08:43.867 "zone_append": false, 00:08:43.867 "compare": false, 00:08:43.867 "compare_and_write": false, 00:08:43.867 "abort": false, 00:08:43.867 "seek_hole": false, 00:08:43.867 "seek_data": false, 00:08:43.867 "copy": false, 00:08:43.867 "nvme_iov_md": false 00:08:43.867 }, 00:08:43.867 "memory_domains": [ 00:08:43.867 { 00:08:43.867 "dma_device_id": "system", 00:08:43.867 "dma_device_type": 1 00:08:43.867 }, 00:08:43.867 { 00:08:43.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.867 "dma_device_type": 2 00:08:43.867 }, 00:08:43.867 { 00:08:43.867 "dma_device_id": "system", 00:08:43.867 "dma_device_type": 1 00:08:43.867 }, 00:08:43.867 { 00:08:43.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.867 "dma_device_type": 2 00:08:43.867 } 00:08:43.867 ], 00:08:43.867 "driver_specific": { 00:08:43.867 "raid": { 00:08:43.867 "uuid": "ffe4a204-6645-48a9-8916-384afe624fe9", 00:08:43.867 "strip_size_kb": 64, 00:08:43.867 "state": "online", 00:08:43.867 "raid_level": "concat", 00:08:43.867 "superblock": true, 00:08:43.867 "num_base_bdevs": 2, 00:08:43.867 "num_base_bdevs_discovered": 2, 00:08:43.867 "num_base_bdevs_operational": 2, 00:08:43.867 "base_bdevs_list": [ 00:08:43.867 { 00:08:43.867 "name": "pt1", 00:08:43.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.867 "is_configured": true, 00:08:43.867 "data_offset": 2048, 00:08:43.867 "data_size": 63488 00:08:43.867 }, 00:08:43.867 { 00:08:43.867 "name": 
"pt2", 00:08:43.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.867 "is_configured": true, 00:08:43.867 "data_offset": 2048, 00:08:43.867 "data_size": 63488 00:08:43.867 } 00:08:43.867 ] 00:08:43.867 } 00:08:43.867 } 00:08:43.867 }' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:44.134 pt2' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:44.134 [2024-11-27 19:06:53.688304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ffe4a204-6645-48a9-8916-384afe624fe9 '!=' ffe4a204-6645-48a9-8916-384afe624fe9 ']' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62301 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62301 ']' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62301 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62301 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62301' 00:08:44.134 killing process with pid 62301 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62301 00:08:44.134 [2024-11-27 19:06:53.764015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.134 19:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62301 00:08:44.134 [2024-11-27 19:06:53.764228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.134 [2024-11-27 19:06:53.764289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.134 [2024-11-27 19:06:53.764302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:44.407 [2024-11-27 19:06:53.985376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.787 19:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:45.787 00:08:45.787 real 0m4.608s 00:08:45.787 user 0m6.301s 00:08:45.787 sys 0m0.834s 00:08:45.787 19:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.787 19:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:45.787 ************************************ 00:08:45.787 END TEST raid_superblock_test 00:08:45.787 ************************************ 00:08:45.787 19:06:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:45.787 19:06:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:45.787 19:06:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.787 19:06:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.787 ************************************ 00:08:45.787 START TEST raid_read_error_test 00:08:45.787 ************************************ 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YycaF1edQq 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62507 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62507 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62507 ']' 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.787 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.787 19:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.787 [2024-11-27 19:06:55.396507] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:45.788 [2024-11-27 19:06:55.396633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62507 ] 00:08:46.046 [2024-11-27 19:06:55.568755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.305 [2024-11-27 19:06:55.708735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.565 [2024-11-27 19:06:55.944534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.565 [2024-11-27 19:06:55.944584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 BaseBdev1_malloc 
00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 true 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 [2024-11-27 19:06:56.281409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:46.825 [2024-11-27 19:06:56.281470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.825 [2024-11-27 19:06:56.281491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:46.825 [2024-11-27 19:06:56.281502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.825 [2024-11-27 19:06:56.283927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.825 [2024-11-27 19:06:56.283968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:46.825 BaseBdev1 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 BaseBdev2_malloc 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 true 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 [2024-11-27 19:06:56.354197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:46.825 [2024-11-27 19:06:56.354254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.825 [2024-11-27 19:06:56.354270] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:46.825 [2024-11-27 19:06:56.354281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.825 [2024-11-27 19:06:56.356748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.825 [2024-11-27 19:06:56.356784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:46.825 BaseBdev2 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 [2024-11-27 19:06:56.366269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.825 [2024-11-27 19:06:56.368576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.825 [2024-11-27 19:06:56.368800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:46.825 [2024-11-27 19:06:56.368817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:46.825 [2024-11-27 19:06:56.369059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:46.825 [2024-11-27 19:06:56.369246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:46.825 [2024-11-27 19:06:56.369259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:46.825 [2024-11-27 19:06:56.369411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.825 "name": "raid_bdev1", 00:08:46.825 "uuid": "c9481e0c-df19-4f10-842a-736e6ca9888a", 00:08:46.825 "strip_size_kb": 64, 00:08:46.825 "state": "online", 00:08:46.825 "raid_level": "concat", 00:08:46.825 "superblock": true, 00:08:46.825 "num_base_bdevs": 2, 00:08:46.825 "num_base_bdevs_discovered": 2, 00:08:46.825 "num_base_bdevs_operational": 2, 00:08:46.825 "base_bdevs_list": [ 00:08:46.825 { 00:08:46.825 "name": "BaseBdev1", 00:08:46.825 "uuid": "cf8307d0-b6d8-5ac9-afad-d6e670c198c2", 00:08:46.825 "is_configured": true, 00:08:46.825 "data_offset": 2048, 00:08:46.825 "data_size": 63488 00:08:46.825 }, 00:08:46.825 { 00:08:46.825 "name": "BaseBdev2", 00:08:46.825 
"uuid": "2e6f8e92-44ad-5cfe-855b-fe88317141d7", 00:08:46.825 "is_configured": true, 00:08:46.825 "data_offset": 2048, 00:08:46.825 "data_size": 63488 00:08:46.825 } 00:08:46.825 ] 00:08:46.825 }' 00:08:46.825 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.826 19:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.395 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:47.395 19:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:47.395 [2024-11-27 19:06:56.926817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.334 "name": "raid_bdev1", 00:08:48.334 "uuid": "c9481e0c-df19-4f10-842a-736e6ca9888a", 00:08:48.334 "strip_size_kb": 64, 00:08:48.334 "state": "online", 00:08:48.334 "raid_level": "concat", 00:08:48.334 "superblock": true, 00:08:48.334 "num_base_bdevs": 2, 00:08:48.334 "num_base_bdevs_discovered": 2, 00:08:48.334 "num_base_bdevs_operational": 2, 00:08:48.334 "base_bdevs_list": [ 00:08:48.334 { 00:08:48.334 "name": "BaseBdev1", 00:08:48.334 "uuid": "cf8307d0-b6d8-5ac9-afad-d6e670c198c2", 00:08:48.334 "is_configured": true, 00:08:48.334 "data_offset": 2048, 00:08:48.334 "data_size": 63488 00:08:48.334 }, 00:08:48.334 { 00:08:48.334 "name": "BaseBdev2", 00:08:48.334 "uuid": 
"2e6f8e92-44ad-5cfe-855b-fe88317141d7", 00:08:48.334 "is_configured": true, 00:08:48.334 "data_offset": 2048, 00:08:48.334 "data_size": 63488 00:08:48.334 } 00:08:48.334 ] 00:08:48.334 }' 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.334 19:06:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.903 [2024-11-27 19:06:58.255223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.903 [2024-11-27 19:06:58.255274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.903 [2024-11-27 19:06:58.258028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.903 [2024-11-27 19:06:58.258125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.903 [2024-11-27 19:06:58.258197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.903 [2024-11-27 19:06:58.258266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:48.903 { 00:08:48.903 "results": [ 00:08:48.903 { 00:08:48.903 "job": "raid_bdev1", 00:08:48.903 "core_mask": "0x1", 00:08:48.903 "workload": "randrw", 00:08:48.903 "percentage": 50, 00:08:48.903 "status": "finished", 00:08:48.903 "queue_depth": 1, 00:08:48.903 "io_size": 131072, 00:08:48.903 "runtime": 1.328865, 00:08:48.903 "iops": 14029.265576262449, 00:08:48.903 "mibps": 1753.6581970328061, 00:08:48.903 "io_failed": 1, 00:08:48.903 "io_timeout": 0, 00:08:48.903 "avg_latency_us": 
99.93358098277166, 00:08:48.903 "min_latency_us": 25.4882096069869, 00:08:48.903 "max_latency_us": 1488.1537117903931 00:08:48.903 } 00:08:48.903 ], 00:08:48.903 "core_count": 1 00:08:48.903 } 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62507 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62507 ']' 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62507 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62507 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.903 killing process with pid 62507 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62507' 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62507 00:08:48.903 [2024-11-27 19:06:58.301659] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.903 19:06:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62507 00:08:48.903 [2024-11-27 19:06:58.449551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YycaF1edQq 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.281 
19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:50.281 ************************************ 00:08:50.281 END TEST raid_read_error_test 00:08:50.281 ************************************ 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:50.281 00:08:50.281 real 0m4.454s 00:08:50.281 user 0m5.163s 00:08:50.281 sys 0m0.660s 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.281 19:06:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.281 19:06:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:50.281 19:06:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.281 19:06:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.281 19:06:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.281 ************************************ 00:08:50.281 START TEST raid_write_error_test 00:08:50.281 ************************************ 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.281 19:06:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qEgZBOElfi 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62652 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62652 00:08:50.281 19:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.282 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62652 ']' 00:08:50.282 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.282 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.282 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.282 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.282 19:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.282 [2024-11-27 19:06:59.914851] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:50.282 [2024-11-27 19:06:59.915470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62652 ] 00:08:50.541 [2024-11-27 19:07:00.089487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.801 [2024-11-27 19:07:00.232443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.060 [2024-11-27 19:07:00.471536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.060 [2024-11-27 19:07:00.471777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.320 BaseBdev1_malloc 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.320 true 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.320 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.320 [2024-11-27 19:07:00.792863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.320 [2024-11-27 19:07:00.792929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.320 [2024-11-27 19:07:00.792953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:51.321 [2024-11-27 19:07:00.792966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.321 [2024-11-27 19:07:00.795429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.321 [2024-11-27 19:07:00.795473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.321 BaseBdev1 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.321 BaseBdev2_malloc 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.321 19:07:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.321 true 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.321 [2024-11-27 19:07:00.867404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.321 [2024-11-27 19:07:00.867547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.321 [2024-11-27 19:07:00.867592] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.321 [2024-11-27 19:07:00.867637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.321 [2024-11-27 19:07:00.870253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.321 [2024-11-27 19:07:00.870336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.321 BaseBdev2 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.321 [2024-11-27 19:07:00.879462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:51.321 [2024-11-27 19:07:00.881624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.321 [2024-11-27 19:07:00.881878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:51.321 [2024-11-27 19:07:00.881933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:51.321 [2024-11-27 19:07:00.882226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.321 [2024-11-27 19:07:00.882465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:51.321 [2024-11-27 19:07:00.882513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:51.321 [2024-11-27 19:07:00.882734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.321 19:07:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.321 "name": "raid_bdev1", 00:08:51.321 "uuid": "304c15c0-ad62-4220-9646-4e1909b40c06", 00:08:51.321 "strip_size_kb": 64, 00:08:51.321 "state": "online", 00:08:51.321 "raid_level": "concat", 00:08:51.321 "superblock": true, 00:08:51.321 "num_base_bdevs": 2, 00:08:51.321 "num_base_bdevs_discovered": 2, 00:08:51.321 "num_base_bdevs_operational": 2, 00:08:51.321 "base_bdevs_list": [ 00:08:51.321 { 00:08:51.321 "name": "BaseBdev1", 00:08:51.321 "uuid": "1df27a9e-e87a-544f-8e5b-a4d6f9b82521", 00:08:51.321 "is_configured": true, 00:08:51.321 "data_offset": 2048, 00:08:51.321 "data_size": 63488 00:08:51.321 }, 00:08:51.321 { 00:08:51.321 "name": "BaseBdev2", 00:08:51.321 "uuid": "6f3f0303-7558-56a7-b350-1ed4722a47ed", 00:08:51.321 "is_configured": true, 00:08:51.321 "data_offset": 2048, 00:08:51.321 "data_size": 63488 00:08:51.321 } 00:08:51.321 ] 00:08:51.321 }' 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.321 19:07:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.890 19:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.890 19:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:51.890 [2024-11-27 19:07:01.435996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.839 "name": "raid_bdev1", 00:08:52.839 "uuid": "304c15c0-ad62-4220-9646-4e1909b40c06", 00:08:52.839 "strip_size_kb": 64, 00:08:52.839 "state": "online", 00:08:52.839 "raid_level": "concat", 00:08:52.839 "superblock": true, 00:08:52.839 "num_base_bdevs": 2, 00:08:52.839 "num_base_bdevs_discovered": 2, 00:08:52.839 "num_base_bdevs_operational": 2, 00:08:52.839 "base_bdevs_list": [ 00:08:52.839 { 00:08:52.839 "name": "BaseBdev1", 00:08:52.839 "uuid": "1df27a9e-e87a-544f-8e5b-a4d6f9b82521", 00:08:52.839 "is_configured": true, 00:08:52.839 "data_offset": 2048, 00:08:52.839 "data_size": 63488 00:08:52.839 }, 00:08:52.839 { 00:08:52.839 "name": "BaseBdev2", 00:08:52.839 "uuid": "6f3f0303-7558-56a7-b350-1ed4722a47ed", 00:08:52.839 "is_configured": true, 00:08:52.839 "data_offset": 2048, 00:08:52.839 "data_size": 63488 00:08:52.839 } 00:08:52.839 ] 00:08:52.839 }' 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.839 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.408 [2024-11-27 19:07:02.845083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.408 [2024-11-27 19:07:02.845211] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.408 [2024-11-27 19:07:02.848000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.408 [2024-11-27 19:07:02.848099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.408 [2024-11-27 19:07:02.848159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.408 [2024-11-27 19:07:02.848211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.408 { 00:08:53.408 "results": [ 00:08:53.408 { 00:08:53.408 "job": "raid_bdev1", 00:08:53.408 "core_mask": "0x1", 00:08:53.408 "workload": "randrw", 00:08:53.408 "percentage": 50, 00:08:53.408 "status": "finished", 00:08:53.408 "queue_depth": 1, 00:08:53.408 "io_size": 131072, 00:08:53.408 "runtime": 1.409944, 00:08:53.408 "iops": 14140.987159773722, 00:08:53.408 "mibps": 1767.6233949717152, 00:08:53.408 "io_failed": 1, 00:08:53.408 "io_timeout": 0, 00:08:53.408 "avg_latency_us": 99.04567428473437, 00:08:53.408 "min_latency_us": 25.4882096069869, 00:08:53.408 "max_latency_us": 1395.1441048034935 00:08:53.408 } 00:08:53.408 ], 00:08:53.408 "core_count": 1 00:08:53.408 } 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62652 00:08:53.408 19:07:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62652 ']' 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62652 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62652 00:08:53.408 killing process with pid 62652 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62652' 00:08:53.408 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62652 00:08:53.408 [2024-11-27 19:07:02.897653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.409 19:07:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62652 00:08:53.668 [2024-11-27 19:07:03.047148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qEgZBOElfi 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.048 19:07:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:55.048 00:08:55.048 real 0m4.520s 00:08:55.048 user 0m5.277s 00:08:55.048 sys 0m0.659s 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.048 19:07:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.048 ************************************ 00:08:55.048 END TEST raid_write_error_test 00:08:55.048 ************************************ 00:08:55.048 19:07:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:55.048 19:07:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:55.048 19:07:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.048 19:07:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.048 19:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.048 ************************************ 00:08:55.048 START TEST raid_state_function_test 00:08:55.048 ************************************ 00:08:55.048 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:55.048 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:55.048 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:55.048 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:55.048 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:55.048 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:55.049 Process raid pid: 62796 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62796 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62796' 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62796 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62796 ']' 00:08:55.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.049 19:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.049 [2024-11-27 19:07:04.497965] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:55.049 [2024-11-27 19:07:04.498204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.049 [2024-11-27 19:07:04.663686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.309 [2024-11-27 19:07:04.803020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.568 [2024-11-27 19:07:05.045494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.569 [2024-11-27 19:07:05.045531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.828 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.828 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.828 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.828 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.828 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.828 [2024-11-27 19:07:05.332412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.828 [2024-11-27 19:07:05.332483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.828 [2024-11-27 19:07:05.332494] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.829 [2024-11-27 19:07:05.332503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.829 19:07:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.829 "name": "Existed_Raid", 00:08:55.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.829 "strip_size_kb": 0, 00:08:55.829 "state": "configuring", 00:08:55.829 
"raid_level": "raid1", 00:08:55.829 "superblock": false, 00:08:55.829 "num_base_bdevs": 2, 00:08:55.829 "num_base_bdevs_discovered": 0, 00:08:55.829 "num_base_bdevs_operational": 2, 00:08:55.829 "base_bdevs_list": [ 00:08:55.829 { 00:08:55.829 "name": "BaseBdev1", 00:08:55.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.829 "is_configured": false, 00:08:55.829 "data_offset": 0, 00:08:55.829 "data_size": 0 00:08:55.829 }, 00:08:55.829 { 00:08:55.829 "name": "BaseBdev2", 00:08:55.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.829 "is_configured": false, 00:08:55.829 "data_offset": 0, 00:08:55.829 "data_size": 0 00:08:55.829 } 00:08:55.829 ] 00:08:55.829 }' 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.829 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.438 [2024-11-27 19:07:05.775671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.438 [2024-11-27 19:07:05.775786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:56.438 [2024-11-27 19:07:05.783632] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.438 [2024-11-27 19:07:05.783728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.438 [2024-11-27 19:07:05.783757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.438 [2024-11-27 19:07:05.783791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.438 [2024-11-27 19:07:05.836141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.438 BaseBdev1 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.438 [ 00:08:56.438 { 00:08:56.438 "name": "BaseBdev1", 00:08:56.438 "aliases": [ 00:08:56.438 "5b39079c-18cd-4641-b171-2e496f285039" 00:08:56.438 ], 00:08:56.438 "product_name": "Malloc disk", 00:08:56.438 "block_size": 512, 00:08:56.438 "num_blocks": 65536, 00:08:56.438 "uuid": "5b39079c-18cd-4641-b171-2e496f285039", 00:08:56.438 "assigned_rate_limits": { 00:08:56.438 "rw_ios_per_sec": 0, 00:08:56.438 "rw_mbytes_per_sec": 0, 00:08:56.438 "r_mbytes_per_sec": 0, 00:08:56.438 "w_mbytes_per_sec": 0 00:08:56.438 }, 00:08:56.438 "claimed": true, 00:08:56.438 "claim_type": "exclusive_write", 00:08:56.438 "zoned": false, 00:08:56.438 "supported_io_types": { 00:08:56.438 "read": true, 00:08:56.438 "write": true, 00:08:56.438 "unmap": true, 00:08:56.438 "flush": true, 00:08:56.438 "reset": true, 00:08:56.438 "nvme_admin": false, 00:08:56.438 "nvme_io": false, 00:08:56.438 "nvme_io_md": false, 00:08:56.438 "write_zeroes": true, 00:08:56.438 "zcopy": true, 00:08:56.438 "get_zone_info": false, 00:08:56.438 "zone_management": false, 00:08:56.438 "zone_append": false, 00:08:56.438 "compare": false, 00:08:56.438 "compare_and_write": false, 00:08:56.438 "abort": true, 00:08:56.438 "seek_hole": false, 00:08:56.438 "seek_data": false, 00:08:56.438 "copy": true, 00:08:56.438 "nvme_iov_md": 
false 00:08:56.438 }, 00:08:56.438 "memory_domains": [ 00:08:56.438 { 00:08:56.438 "dma_device_id": "system", 00:08:56.438 "dma_device_type": 1 00:08:56.438 }, 00:08:56.438 { 00:08:56.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.438 "dma_device_type": 2 00:08:56.438 } 00:08:56.438 ], 00:08:56.438 "driver_specific": {} 00:08:56.438 } 00:08:56.438 ] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.438 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.439 
19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.439 "name": "Existed_Raid", 00:08:56.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.439 "strip_size_kb": 0, 00:08:56.439 "state": "configuring", 00:08:56.439 "raid_level": "raid1", 00:08:56.439 "superblock": false, 00:08:56.439 "num_base_bdevs": 2, 00:08:56.439 "num_base_bdevs_discovered": 1, 00:08:56.439 "num_base_bdevs_operational": 2, 00:08:56.439 "base_bdevs_list": [ 00:08:56.439 { 00:08:56.439 "name": "BaseBdev1", 00:08:56.439 "uuid": "5b39079c-18cd-4641-b171-2e496f285039", 00:08:56.439 "is_configured": true, 00:08:56.439 "data_offset": 0, 00:08:56.439 "data_size": 65536 00:08:56.439 }, 00:08:56.439 { 00:08:56.439 "name": "BaseBdev2", 00:08:56.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.439 "is_configured": false, 00:08:56.439 "data_offset": 0, 00:08:56.439 "data_size": 0 00:08:56.439 } 00:08:56.439 ] 00:08:56.439 }' 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.439 19:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.698 [2024-11-27 19:07:06.295468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.698 [2024-11-27 19:07:06.295540] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.698 [2024-11-27 19:07:06.303478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.698 [2024-11-27 19:07:06.305750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.698 [2024-11-27 19:07:06.305832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.698 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.958 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.958 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.958 "name": "Existed_Raid", 00:08:56.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.958 "strip_size_kb": 0, 00:08:56.958 "state": "configuring", 00:08:56.958 "raid_level": "raid1", 00:08:56.958 "superblock": false, 00:08:56.958 "num_base_bdevs": 2, 00:08:56.958 "num_base_bdevs_discovered": 1, 00:08:56.958 "num_base_bdevs_operational": 2, 00:08:56.958 "base_bdevs_list": [ 00:08:56.958 { 00:08:56.958 "name": "BaseBdev1", 00:08:56.958 "uuid": "5b39079c-18cd-4641-b171-2e496f285039", 00:08:56.958 "is_configured": true, 00:08:56.958 "data_offset": 0, 00:08:56.958 "data_size": 65536 00:08:56.958 }, 00:08:56.958 { 00:08:56.958 "name": "BaseBdev2", 00:08:56.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.958 "is_configured": false, 00:08:56.958 "data_offset": 0, 00:08:56.958 "data_size": 0 00:08:56.958 } 00:08:56.958 ] 
00:08:56.958 }' 00:08:56.958 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.958 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.219 [2024-11-27 19:07:06.810820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.219 [2024-11-27 19:07:06.811017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.219 [2024-11-27 19:07:06.811032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:57.219 [2024-11-27 19:07:06.811427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:57.219 [2024-11-27 19:07:06.811636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.219 [2024-11-27 19:07:06.811651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:57.219 [2024-11-27 19:07:06.811980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.219 BaseBdev2 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.219 [ 00:08:57.219 { 00:08:57.219 "name": "BaseBdev2", 00:08:57.219 "aliases": [ 00:08:57.219 "ba480043-a6c8-49d6-8693-9dcb13f720b3" 00:08:57.219 ], 00:08:57.219 "product_name": "Malloc disk", 00:08:57.219 "block_size": 512, 00:08:57.219 "num_blocks": 65536, 00:08:57.219 "uuid": "ba480043-a6c8-49d6-8693-9dcb13f720b3", 00:08:57.219 "assigned_rate_limits": { 00:08:57.219 "rw_ios_per_sec": 0, 00:08:57.219 "rw_mbytes_per_sec": 0, 00:08:57.219 "r_mbytes_per_sec": 0, 00:08:57.219 "w_mbytes_per_sec": 0 00:08:57.219 }, 00:08:57.219 "claimed": true, 00:08:57.219 "claim_type": "exclusive_write", 00:08:57.219 "zoned": false, 00:08:57.219 "supported_io_types": { 00:08:57.219 "read": true, 00:08:57.219 "write": true, 00:08:57.219 "unmap": true, 00:08:57.219 "flush": true, 00:08:57.219 "reset": true, 00:08:57.219 "nvme_admin": false, 00:08:57.219 "nvme_io": false, 00:08:57.219 "nvme_io_md": false, 00:08:57.219 "write_zeroes": 
true, 00:08:57.219 "zcopy": true, 00:08:57.219 "get_zone_info": false, 00:08:57.219 "zone_management": false, 00:08:57.219 "zone_append": false, 00:08:57.219 "compare": false, 00:08:57.219 "compare_and_write": false, 00:08:57.219 "abort": true, 00:08:57.219 "seek_hole": false, 00:08:57.219 "seek_data": false, 00:08:57.219 "copy": true, 00:08:57.219 "nvme_iov_md": false 00:08:57.219 }, 00:08:57.219 "memory_domains": [ 00:08:57.219 { 00:08:57.219 "dma_device_id": "system", 00:08:57.219 "dma_device_type": 1 00:08:57.219 }, 00:08:57.219 { 00:08:57.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.219 "dma_device_type": 2 00:08:57.219 } 00:08:57.219 ], 00:08:57.219 "driver_specific": {} 00:08:57.219 } 00:08:57.219 ] 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.219 19:07:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.219 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.479 "name": "Existed_Raid", 00:08:57.479 "uuid": "695955fc-d8e7-4c37-a2dd-ec29f512ac3c", 00:08:57.479 "strip_size_kb": 0, 00:08:57.479 "state": "online", 00:08:57.479 "raid_level": "raid1", 00:08:57.479 "superblock": false, 00:08:57.479 "num_base_bdevs": 2, 00:08:57.479 "num_base_bdevs_discovered": 2, 00:08:57.479 "num_base_bdevs_operational": 2, 00:08:57.479 "base_bdevs_list": [ 00:08:57.479 { 00:08:57.479 "name": "BaseBdev1", 00:08:57.479 "uuid": "5b39079c-18cd-4641-b171-2e496f285039", 00:08:57.479 "is_configured": true, 00:08:57.479 "data_offset": 0, 00:08:57.479 "data_size": 65536 00:08:57.479 }, 00:08:57.479 { 00:08:57.479 "name": "BaseBdev2", 00:08:57.479 "uuid": "ba480043-a6c8-49d6-8693-9dcb13f720b3", 00:08:57.479 "is_configured": true, 00:08:57.479 "data_offset": 0, 00:08:57.479 "data_size": 65536 00:08:57.479 } 00:08:57.479 ] 00:08:57.479 }' 00:08:57.479 19:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.479 19:07:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.739 [2024-11-27 19:07:07.306229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.739 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.739 "name": "Existed_Raid", 00:08:57.739 "aliases": [ 00:08:57.739 "695955fc-d8e7-4c37-a2dd-ec29f512ac3c" 00:08:57.739 ], 00:08:57.739 "product_name": "Raid Volume", 00:08:57.739 "block_size": 512, 00:08:57.739 "num_blocks": 65536, 00:08:57.739 "uuid": "695955fc-d8e7-4c37-a2dd-ec29f512ac3c", 00:08:57.739 "assigned_rate_limits": { 00:08:57.739 "rw_ios_per_sec": 0, 00:08:57.739 "rw_mbytes_per_sec": 0, 00:08:57.739 "r_mbytes_per_sec": 0, 00:08:57.739 
"w_mbytes_per_sec": 0 00:08:57.739 }, 00:08:57.740 "claimed": false, 00:08:57.740 "zoned": false, 00:08:57.740 "supported_io_types": { 00:08:57.740 "read": true, 00:08:57.740 "write": true, 00:08:57.740 "unmap": false, 00:08:57.740 "flush": false, 00:08:57.740 "reset": true, 00:08:57.740 "nvme_admin": false, 00:08:57.740 "nvme_io": false, 00:08:57.740 "nvme_io_md": false, 00:08:57.740 "write_zeroes": true, 00:08:57.740 "zcopy": false, 00:08:57.740 "get_zone_info": false, 00:08:57.740 "zone_management": false, 00:08:57.740 "zone_append": false, 00:08:57.740 "compare": false, 00:08:57.740 "compare_and_write": false, 00:08:57.740 "abort": false, 00:08:57.740 "seek_hole": false, 00:08:57.740 "seek_data": false, 00:08:57.740 "copy": false, 00:08:57.740 "nvme_iov_md": false 00:08:57.740 }, 00:08:57.740 "memory_domains": [ 00:08:57.740 { 00:08:57.740 "dma_device_id": "system", 00:08:57.740 "dma_device_type": 1 00:08:57.740 }, 00:08:57.740 { 00:08:57.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.740 "dma_device_type": 2 00:08:57.740 }, 00:08:57.740 { 00:08:57.740 "dma_device_id": "system", 00:08:57.740 "dma_device_type": 1 00:08:57.740 }, 00:08:57.740 { 00:08:57.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.740 "dma_device_type": 2 00:08:57.740 } 00:08:57.740 ], 00:08:57.740 "driver_specific": { 00:08:57.740 "raid": { 00:08:57.740 "uuid": "695955fc-d8e7-4c37-a2dd-ec29f512ac3c", 00:08:57.740 "strip_size_kb": 0, 00:08:57.740 "state": "online", 00:08:57.740 "raid_level": "raid1", 00:08:57.740 "superblock": false, 00:08:57.740 "num_base_bdevs": 2, 00:08:57.740 "num_base_bdevs_discovered": 2, 00:08:57.740 "num_base_bdevs_operational": 2, 00:08:57.740 "base_bdevs_list": [ 00:08:57.740 { 00:08:57.740 "name": "BaseBdev1", 00:08:57.740 "uuid": "5b39079c-18cd-4641-b171-2e496f285039", 00:08:57.740 "is_configured": true, 00:08:57.740 "data_offset": 0, 00:08:57.740 "data_size": 65536 00:08:57.740 }, 00:08:57.740 { 00:08:57.740 "name": "BaseBdev2", 00:08:57.740 "uuid": 
"ba480043-a6c8-49d6-8693-9dcb13f720b3", 00:08:57.740 "is_configured": true, 00:08:57.740 "data_offset": 0, 00:08:57.740 "data_size": 65536 00:08:57.740 } 00:08:57.740 ] 00:08:57.740 } 00:08:57.740 } 00:08:57.740 }' 00:08:57.740 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.999 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.999 BaseBdev2' 00:08:57.999 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.999 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.999 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.000 [2024-11-27 19:07:07.521652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.000 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.259 "name": "Existed_Raid", 00:08:58.259 "uuid": "695955fc-d8e7-4c37-a2dd-ec29f512ac3c", 00:08:58.259 "strip_size_kb": 0, 00:08:58.259 "state": "online", 00:08:58.259 "raid_level": "raid1", 00:08:58.259 "superblock": false, 00:08:58.259 "num_base_bdevs": 2, 00:08:58.259 "num_base_bdevs_discovered": 1, 00:08:58.259 "num_base_bdevs_operational": 1, 00:08:58.259 "base_bdevs_list": [ 00:08:58.259 { 
00:08:58.259 "name": null, 00:08:58.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.259 "is_configured": false, 00:08:58.259 "data_offset": 0, 00:08:58.259 "data_size": 65536 00:08:58.259 }, 00:08:58.259 { 00:08:58.259 "name": "BaseBdev2", 00:08:58.259 "uuid": "ba480043-a6c8-49d6-8693-9dcb13f720b3", 00:08:58.259 "is_configured": true, 00:08:58.259 "data_offset": 0, 00:08:58.259 "data_size": 65536 00:08:58.259 } 00:08:58.259 ] 00:08:58.259 }' 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.259 19:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.518 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:58.778 [2024-11-27 19:07:08.175554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.778 [2024-11-27 19:07:08.175735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.778 [2024-11-27 19:07:08.278899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.778 [2024-11-27 19:07:08.278957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.778 [2024-11-27 19:07:08.278972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62796 00:08:58.778 19:07:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62796 ']' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62796 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62796 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62796' 00:08:58.778 killing process with pid 62796 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62796 00:08:58.778 [2024-11-27 19:07:08.375600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.778 19:07:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62796 00:08:58.778 [2024-11-27 19:07:08.393407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:00.159 00:09:00.159 real 0m5.205s 00:09:00.159 user 0m7.418s 00:09:00.159 sys 0m0.886s 00:09:00.159 ************************************ 00:09:00.159 END TEST raid_state_function_test 00:09:00.159 ************************************ 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.159 19:07:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:00.159 19:07:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:00.159 19:07:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.159 19:07:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.159 ************************************ 00:09:00.159 START TEST raid_state_function_test_sb 00:09:00.159 ************************************ 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63045 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63045' 00:09:00.159 Process raid pid: 63045 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63045 00:09:00.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63045 ']' 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.159 19:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.159 [2024-11-27 19:07:09.767914] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:00.159 [2024-11-27 19:07:09.768122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.418 [2024-11-27 19:07:09.929421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.678 [2024-11-27 19:07:10.070123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.937 [2024-11-27 19:07:10.315028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.937 [2024-11-27 19:07:10.315178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.198 [2024-11-27 19:07:10.637598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.198 [2024-11-27 19:07:10.637668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.198 [2024-11-27 19:07:10.637680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.198 [2024-11-27 19:07:10.637698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.198 19:07:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.198 "name": "Existed_Raid", 00:09:01.198 "uuid": "38a46a12-7d8d-450f-a87d-d403fadc4719", 00:09:01.198 "strip_size_kb": 0, 00:09:01.198 "state": "configuring", 00:09:01.198 "raid_level": "raid1", 00:09:01.198 "superblock": true, 00:09:01.198 "num_base_bdevs": 2, 00:09:01.198 "num_base_bdevs_discovered": 0, 00:09:01.198 "num_base_bdevs_operational": 2, 00:09:01.198 "base_bdevs_list": [ 00:09:01.198 { 00:09:01.198 "name": "BaseBdev1", 00:09:01.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.198 "is_configured": false, 00:09:01.198 "data_offset": 0, 00:09:01.198 "data_size": 0 00:09:01.198 }, 00:09:01.198 { 00:09:01.198 "name": "BaseBdev2", 00:09:01.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.198 "is_configured": false, 00:09:01.198 "data_offset": 0, 00:09:01.198 "data_size": 0 00:09:01.198 } 00:09:01.198 ] 00:09:01.198 }' 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.198 19:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.773 
19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 [2024-11-27 19:07:11.156656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.773 [2024-11-27 19:07:11.156772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 [2024-11-27 19:07:11.168628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.773 [2024-11-27 19:07:11.168730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.773 [2024-11-27 19:07:11.168763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.773 [2024-11-27 19:07:11.168791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 [2024-11-27 
19:07:11.222641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.773 BaseBdev1 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.773 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.773 [ 00:09:01.773 { 00:09:01.773 "name": "BaseBdev1", 00:09:01.773 "aliases": [ 00:09:01.773 "c0ace12b-2800-40b3-a0ec-793aa6e55536" 00:09:01.773 ], 00:09:01.773 "product_name": "Malloc disk", 00:09:01.773 "block_size": 512, 00:09:01.773 "num_blocks": 
65536, 00:09:01.773 "uuid": "c0ace12b-2800-40b3-a0ec-793aa6e55536", 00:09:01.773 "assigned_rate_limits": { 00:09:01.773 "rw_ios_per_sec": 0, 00:09:01.773 "rw_mbytes_per_sec": 0, 00:09:01.773 "r_mbytes_per_sec": 0, 00:09:01.773 "w_mbytes_per_sec": 0 00:09:01.773 }, 00:09:01.773 "claimed": true, 00:09:01.773 "claim_type": "exclusive_write", 00:09:01.774 "zoned": false, 00:09:01.774 "supported_io_types": { 00:09:01.774 "read": true, 00:09:01.774 "write": true, 00:09:01.774 "unmap": true, 00:09:01.774 "flush": true, 00:09:01.774 "reset": true, 00:09:01.774 "nvme_admin": false, 00:09:01.774 "nvme_io": false, 00:09:01.774 "nvme_io_md": false, 00:09:01.774 "write_zeroes": true, 00:09:01.774 "zcopy": true, 00:09:01.774 "get_zone_info": false, 00:09:01.774 "zone_management": false, 00:09:01.774 "zone_append": false, 00:09:01.774 "compare": false, 00:09:01.774 "compare_and_write": false, 00:09:01.774 "abort": true, 00:09:01.774 "seek_hole": false, 00:09:01.774 "seek_data": false, 00:09:01.774 "copy": true, 00:09:01.774 "nvme_iov_md": false 00:09:01.774 }, 00:09:01.774 "memory_domains": [ 00:09:01.774 { 00:09:01.774 "dma_device_id": "system", 00:09:01.774 "dma_device_type": 1 00:09:01.774 }, 00:09:01.774 { 00:09:01.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.774 "dma_device_type": 2 00:09:01.774 } 00:09:01.774 ], 00:09:01.774 "driver_specific": {} 00:09:01.774 } 00:09:01.774 ] 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.774 "name": "Existed_Raid", 00:09:01.774 "uuid": "d7ade8c5-248a-47d7-86b6-91c19b1ef579", 00:09:01.774 "strip_size_kb": 0, 00:09:01.774 "state": "configuring", 00:09:01.774 "raid_level": "raid1", 00:09:01.774 "superblock": true, 00:09:01.774 "num_base_bdevs": 2, 00:09:01.774 "num_base_bdevs_discovered": 1, 00:09:01.774 "num_base_bdevs_operational": 2, 00:09:01.774 "base_bdevs_list": [ 00:09:01.774 { 00:09:01.774 "name": "BaseBdev1", 00:09:01.774 "uuid": 
"c0ace12b-2800-40b3-a0ec-793aa6e55536", 00:09:01.774 "is_configured": true, 00:09:01.774 "data_offset": 2048, 00:09:01.774 "data_size": 63488 00:09:01.774 }, 00:09:01.774 { 00:09:01.774 "name": "BaseBdev2", 00:09:01.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.774 "is_configured": false, 00:09:01.774 "data_offset": 0, 00:09:01.774 "data_size": 0 00:09:01.774 } 00:09:01.774 ] 00:09:01.774 }' 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.774 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.343 [2024-11-27 19:07:11.673911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.343 [2024-11-27 19:07:11.673973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.343 [2024-11-27 19:07:11.681939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.343 [2024-11-27 19:07:11.684233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:09:02.343 [2024-11-27 19:07:11.684331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.343 
19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.343 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.344 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.344 "name": "Existed_Raid", 00:09:02.344 "uuid": "5b881ef2-3ec6-47c0-9aa9-52228a4410b2", 00:09:02.344 "strip_size_kb": 0, 00:09:02.344 "state": "configuring", 00:09:02.344 "raid_level": "raid1", 00:09:02.344 "superblock": true, 00:09:02.344 "num_base_bdevs": 2, 00:09:02.344 "num_base_bdevs_discovered": 1, 00:09:02.344 "num_base_bdevs_operational": 2, 00:09:02.344 "base_bdevs_list": [ 00:09:02.344 { 00:09:02.344 "name": "BaseBdev1", 00:09:02.344 "uuid": "c0ace12b-2800-40b3-a0ec-793aa6e55536", 00:09:02.344 "is_configured": true, 00:09:02.344 "data_offset": 2048, 00:09:02.344 "data_size": 63488 00:09:02.344 }, 00:09:02.344 { 00:09:02.344 "name": "BaseBdev2", 00:09:02.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.344 "is_configured": false, 00:09:02.344 "data_offset": 0, 00:09:02.344 "data_size": 0 00:09:02.344 } 00:09:02.344 ] 00:09:02.344 }' 00:09:02.344 19:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.344 19:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 [2024-11-27 19:07:12.153632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.603 [2024-11-27 19:07:12.153981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000007e80 00:09:02.603 [2024-11-27 19:07:12.153999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:02.603 BaseBdev2 00:09:02.603 [2024-11-27 19:07:12.154473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:02.603 [2024-11-27 19:07:12.154670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.603 [2024-11-27 19:07:12.154687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:02.603 [2024-11-27 19:07:12.154878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.603 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.604 19:07:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.604 [ 00:09:02.604 { 00:09:02.604 "name": "BaseBdev2", 00:09:02.604 "aliases": [ 00:09:02.604 "960197ca-e7e5-4371-81f4-c4a8969ac726" 00:09:02.604 ], 00:09:02.604 "product_name": "Malloc disk", 00:09:02.604 "block_size": 512, 00:09:02.604 "num_blocks": 65536, 00:09:02.604 "uuid": "960197ca-e7e5-4371-81f4-c4a8969ac726", 00:09:02.604 "assigned_rate_limits": { 00:09:02.604 "rw_ios_per_sec": 0, 00:09:02.604 "rw_mbytes_per_sec": 0, 00:09:02.604 "r_mbytes_per_sec": 0, 00:09:02.604 "w_mbytes_per_sec": 0 00:09:02.604 }, 00:09:02.604 "claimed": true, 00:09:02.604 "claim_type": "exclusive_write", 00:09:02.604 "zoned": false, 00:09:02.604 "supported_io_types": { 00:09:02.604 "read": true, 00:09:02.604 "write": true, 00:09:02.604 "unmap": true, 00:09:02.604 "flush": true, 00:09:02.604 "reset": true, 00:09:02.604 "nvme_admin": false, 00:09:02.604 "nvme_io": false, 00:09:02.604 "nvme_io_md": false, 00:09:02.604 "write_zeroes": true, 00:09:02.604 "zcopy": true, 00:09:02.604 "get_zone_info": false, 00:09:02.604 "zone_management": false, 00:09:02.604 "zone_append": false, 00:09:02.604 "compare": false, 00:09:02.604 "compare_and_write": false, 00:09:02.604 "abort": true, 00:09:02.604 "seek_hole": false, 00:09:02.604 "seek_data": false, 00:09:02.604 "copy": true, 00:09:02.604 "nvme_iov_md": false 00:09:02.604 }, 00:09:02.604 "memory_domains": [ 00:09:02.604 { 00:09:02.604 "dma_device_id": "system", 00:09:02.604 "dma_device_type": 1 00:09:02.604 }, 00:09:02.604 { 00:09:02.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.604 "dma_device_type": 2 00:09:02.604 } 00:09:02.604 ], 00:09:02.604 "driver_specific": {} 00:09:02.604 } 00:09:02.604 ] 
00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.604 
19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.604 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.863 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.863 "name": "Existed_Raid", 00:09:02.863 "uuid": "5b881ef2-3ec6-47c0-9aa9-52228a4410b2", 00:09:02.863 "strip_size_kb": 0, 00:09:02.863 "state": "online", 00:09:02.863 "raid_level": "raid1", 00:09:02.863 "superblock": true, 00:09:02.863 "num_base_bdevs": 2, 00:09:02.863 "num_base_bdevs_discovered": 2, 00:09:02.863 "num_base_bdevs_operational": 2, 00:09:02.863 "base_bdevs_list": [ 00:09:02.863 { 00:09:02.863 "name": "BaseBdev1", 00:09:02.863 "uuid": "c0ace12b-2800-40b3-a0ec-793aa6e55536", 00:09:02.863 "is_configured": true, 00:09:02.863 "data_offset": 2048, 00:09:02.863 "data_size": 63488 00:09:02.863 }, 00:09:02.863 { 00:09:02.863 "name": "BaseBdev2", 00:09:02.863 "uuid": "960197ca-e7e5-4371-81f4-c4a8969ac726", 00:09:02.863 "is_configured": true, 00:09:02.864 "data_offset": 2048, 00:09:02.864 "data_size": 63488 00:09:02.864 } 00:09:02.864 ] 00:09:02.864 }' 00:09:02.864 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.864 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.124 19:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.124 [2024-11-27 19:07:12.661135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.124 "name": "Existed_Raid", 00:09:03.124 "aliases": [ 00:09:03.124 "5b881ef2-3ec6-47c0-9aa9-52228a4410b2" 00:09:03.124 ], 00:09:03.124 "product_name": "Raid Volume", 00:09:03.124 "block_size": 512, 00:09:03.124 "num_blocks": 63488, 00:09:03.124 "uuid": "5b881ef2-3ec6-47c0-9aa9-52228a4410b2", 00:09:03.124 "assigned_rate_limits": { 00:09:03.124 "rw_ios_per_sec": 0, 00:09:03.124 "rw_mbytes_per_sec": 0, 00:09:03.124 "r_mbytes_per_sec": 0, 00:09:03.124 "w_mbytes_per_sec": 0 00:09:03.124 }, 00:09:03.124 "claimed": false, 00:09:03.124 "zoned": false, 00:09:03.124 "supported_io_types": { 00:09:03.124 "read": true, 00:09:03.124 "write": true, 00:09:03.124 "unmap": false, 00:09:03.124 "flush": false, 00:09:03.124 "reset": true, 00:09:03.124 "nvme_admin": false, 00:09:03.124 "nvme_io": false, 00:09:03.124 "nvme_io_md": false, 00:09:03.124 "write_zeroes": true, 00:09:03.124 "zcopy": false, 00:09:03.124 "get_zone_info": false, 00:09:03.124 "zone_management": false, 00:09:03.124 "zone_append": false, 00:09:03.124 "compare": false, 00:09:03.124 "compare_and_write": false, 00:09:03.124 "abort": false, 
00:09:03.124 "seek_hole": false, 00:09:03.124 "seek_data": false, 00:09:03.124 "copy": false, 00:09:03.124 "nvme_iov_md": false 00:09:03.124 }, 00:09:03.124 "memory_domains": [ 00:09:03.124 { 00:09:03.124 "dma_device_id": "system", 00:09:03.124 "dma_device_type": 1 00:09:03.124 }, 00:09:03.124 { 00:09:03.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.124 "dma_device_type": 2 00:09:03.124 }, 00:09:03.124 { 00:09:03.124 "dma_device_id": "system", 00:09:03.124 "dma_device_type": 1 00:09:03.124 }, 00:09:03.124 { 00:09:03.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.124 "dma_device_type": 2 00:09:03.124 } 00:09:03.124 ], 00:09:03.124 "driver_specific": { 00:09:03.124 "raid": { 00:09:03.124 "uuid": "5b881ef2-3ec6-47c0-9aa9-52228a4410b2", 00:09:03.124 "strip_size_kb": 0, 00:09:03.124 "state": "online", 00:09:03.124 "raid_level": "raid1", 00:09:03.124 "superblock": true, 00:09:03.124 "num_base_bdevs": 2, 00:09:03.124 "num_base_bdevs_discovered": 2, 00:09:03.124 "num_base_bdevs_operational": 2, 00:09:03.124 "base_bdevs_list": [ 00:09:03.124 { 00:09:03.124 "name": "BaseBdev1", 00:09:03.124 "uuid": "c0ace12b-2800-40b3-a0ec-793aa6e55536", 00:09:03.124 "is_configured": true, 00:09:03.124 "data_offset": 2048, 00:09:03.124 "data_size": 63488 00:09:03.124 }, 00:09:03.124 { 00:09:03.124 "name": "BaseBdev2", 00:09:03.124 "uuid": "960197ca-e7e5-4371-81f4-c4a8969ac726", 00:09:03.124 "is_configured": true, 00:09:03.124 "data_offset": 2048, 00:09:03.124 "data_size": 63488 00:09:03.124 } 00:09:03.124 ] 00:09:03.124 } 00:09:03.124 } 00:09:03.124 }' 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:03.124 BaseBdev2' 00:09:03.124 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.384 19:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.384 [2024-11-27 19:07:12.868512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.384 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.385 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.385 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.385 19:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.385 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.385 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.385 19:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.644 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.644 "name": "Existed_Raid", 00:09:03.644 "uuid": "5b881ef2-3ec6-47c0-9aa9-52228a4410b2", 00:09:03.644 "strip_size_kb": 0, 00:09:03.644 "state": "online", 00:09:03.644 "raid_level": "raid1", 00:09:03.644 "superblock": true, 00:09:03.644 "num_base_bdevs": 2, 00:09:03.644 "num_base_bdevs_discovered": 1, 00:09:03.644 "num_base_bdevs_operational": 1, 00:09:03.644 "base_bdevs_list": [ 00:09:03.644 { 00:09:03.644 "name": null, 00:09:03.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.644 "is_configured": false, 00:09:03.644 "data_offset": 0, 00:09:03.644 "data_size": 63488 00:09:03.644 }, 00:09:03.644 { 00:09:03.644 "name": "BaseBdev2", 00:09:03.644 "uuid": "960197ca-e7e5-4371-81f4-c4a8969ac726", 00:09:03.644 "is_configured": true, 00:09:03.644 "data_offset": 2048, 00:09:03.644 "data_size": 63488 00:09:03.644 } 00:09:03.644 ] 00:09:03.644 }' 00:09:03.644 19:07:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.644 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.904 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.904 [2024-11-27 19:07:13.469781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.904 [2024-11-27 19:07:13.469904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.163 [2024-11-27 19:07:13.574509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.163 [2024-11-27 19:07:13.574571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.163 [2024-11-27 19:07:13.574585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63045 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63045 ']' 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63045 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63045 00:09:04.163 killing process with pid 63045 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63045' 00:09:04.163 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63045 00:09:04.164 [2024-11-27 19:07:13.672967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.164 19:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63045 00:09:04.164 [2024-11-27 19:07:13.690693] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.542 19:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:05.542 00:09:05.542 real 0m5.249s 00:09:05.542 user 0m7.382s 00:09:05.542 sys 0m0.957s 00:09:05.542 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.542 ************************************ 00:09:05.542 END TEST raid_state_function_test_sb 00:09:05.542 ************************************ 00:09:05.542 19:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.542 19:07:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:05.542 19:07:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:05.542 19:07:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.542 19:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.542 ************************************ 00:09:05.542 START TEST 
raid_superblock_test 00:09:05.542 ************************************ 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:05.542 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63296 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63296 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63296 ']' 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.543 19:07:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.543 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.543 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.543 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.543 [2024-11-27 19:07:15.096740] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:05.543 [2024-11-27 19:07:15.096963] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63296 ] 00:09:05.802 [2024-11-27 19:07:15.266672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.802 [2024-11-27 19:07:15.404899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.061 [2024-11-27 19:07:15.642887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.061 [2024-11-27 19:07:15.643074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:06.320 
19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.320 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.580 malloc1 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.580 [2024-11-27 19:07:15.985721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:06.580 [2024-11-27 19:07:15.985786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.580 [2024-11-27 19:07:15.985813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:06.580 [2024-11-27 19:07:15.985822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.580 [2024-11-27 19:07:15.988332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.580 [2024-11-27 19:07:15.988372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:06.580 pt1 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.580 19:07:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.580 malloc2 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.580 [2024-11-27 19:07:16.044637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.580 [2024-11-27 19:07:16.044772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.580 [2024-11-27 19:07:16.044836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:06.580 [2024-11-27 19:07:16.044871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.580 [2024-11-27 19:07:16.047265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.580 [2024-11-27 19:07:16.047338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.580 
pt2 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.580 [2024-11-27 19:07:16.056670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:06.580 [2024-11-27 19:07:16.058786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.580 [2024-11-27 19:07:16.059027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:06.580 [2024-11-27 19:07:16.059079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:06.580 [2024-11-27 19:07:16.059362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:06.580 [2024-11-27 19:07:16.059583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:06.580 [2024-11-27 19:07:16.059633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:06.580 [2024-11-27 19:07:16.059830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.580 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.580 "name": "raid_bdev1", 00:09:06.580 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:06.580 "strip_size_kb": 0, 00:09:06.580 "state": "online", 00:09:06.580 "raid_level": "raid1", 00:09:06.580 "superblock": true, 00:09:06.580 "num_base_bdevs": 2, 00:09:06.580 "num_base_bdevs_discovered": 2, 00:09:06.580 "num_base_bdevs_operational": 2, 00:09:06.580 "base_bdevs_list": [ 00:09:06.580 { 00:09:06.580 "name": "pt1", 00:09:06.580 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:06.580 "is_configured": true, 00:09:06.580 "data_offset": 2048, 00:09:06.580 "data_size": 63488 00:09:06.580 }, 00:09:06.581 { 00:09:06.581 "name": "pt2", 00:09:06.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.581 "is_configured": true, 00:09:06.581 "data_offset": 2048, 00:09:06.581 "data_size": 63488 00:09:06.581 } 00:09:06.581 ] 00:09:06.581 }' 00:09:06.581 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.581 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.149 [2024-11-27 19:07:16.496170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:07.149 "name": "raid_bdev1", 00:09:07.149 "aliases": [ 00:09:07.149 "9ba77285-38de-47a9-8ac5-b1d74e94716d" 00:09:07.149 ], 00:09:07.149 "product_name": "Raid Volume", 00:09:07.149 "block_size": 512, 00:09:07.149 "num_blocks": 63488, 00:09:07.149 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:07.149 "assigned_rate_limits": { 00:09:07.149 "rw_ios_per_sec": 0, 00:09:07.149 "rw_mbytes_per_sec": 0, 00:09:07.149 "r_mbytes_per_sec": 0, 00:09:07.149 "w_mbytes_per_sec": 0 00:09:07.149 }, 00:09:07.149 "claimed": false, 00:09:07.149 "zoned": false, 00:09:07.149 "supported_io_types": { 00:09:07.149 "read": true, 00:09:07.149 "write": true, 00:09:07.149 "unmap": false, 00:09:07.149 "flush": false, 00:09:07.149 "reset": true, 00:09:07.149 "nvme_admin": false, 00:09:07.149 "nvme_io": false, 00:09:07.149 "nvme_io_md": false, 00:09:07.149 "write_zeroes": true, 00:09:07.149 "zcopy": false, 00:09:07.149 "get_zone_info": false, 00:09:07.149 "zone_management": false, 00:09:07.149 "zone_append": false, 00:09:07.149 "compare": false, 00:09:07.149 "compare_and_write": false, 00:09:07.149 "abort": false, 00:09:07.149 "seek_hole": false, 00:09:07.149 "seek_data": false, 00:09:07.149 "copy": false, 00:09:07.149 "nvme_iov_md": false 00:09:07.149 }, 00:09:07.149 "memory_domains": [ 00:09:07.149 { 00:09:07.149 "dma_device_id": "system", 00:09:07.149 "dma_device_type": 1 00:09:07.149 }, 00:09:07.149 { 00:09:07.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.149 "dma_device_type": 2 00:09:07.149 }, 00:09:07.149 { 00:09:07.149 "dma_device_id": "system", 00:09:07.149 "dma_device_type": 1 00:09:07.149 }, 00:09:07.149 { 00:09:07.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.149 "dma_device_type": 2 00:09:07.149 } 00:09:07.149 ], 00:09:07.149 "driver_specific": { 00:09:07.149 "raid": { 00:09:07.149 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:07.149 "strip_size_kb": 0, 00:09:07.149 "state": "online", 00:09:07.149 "raid_level": "raid1", 
00:09:07.149 "superblock": true, 00:09:07.149 "num_base_bdevs": 2, 00:09:07.149 "num_base_bdevs_discovered": 2, 00:09:07.149 "num_base_bdevs_operational": 2, 00:09:07.149 "base_bdevs_list": [ 00:09:07.149 { 00:09:07.149 "name": "pt1", 00:09:07.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.149 "is_configured": true, 00:09:07.149 "data_offset": 2048, 00:09:07.149 "data_size": 63488 00:09:07.149 }, 00:09:07.149 { 00:09:07.149 "name": "pt2", 00:09:07.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.149 "is_configured": true, 00:09:07.149 "data_offset": 2048, 00:09:07.149 "data_size": 63488 00:09:07.149 } 00:09:07.149 ] 00:09:07.149 } 00:09:07.149 } 00:09:07.149 }' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:07.149 pt2' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:07.149 [2024-11-27 19:07:16.715784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ba77285-38de-47a9-8ac5-b1d74e94716d 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9ba77285-38de-47a9-8ac5-b1d74e94716d ']' 00:09:07.149 19:07:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.149 [2024-11-27 19:07:16.763385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.149 [2024-11-27 19:07:16.763454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.149 [2024-11-27 19:07:16.763571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.149 [2024-11-27 19:07:16.763665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.149 [2024-11-27 19:07:16.763738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.149 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:07.409 19:07:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.409 [2024-11-27 19:07:16.903162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:07.409 [2024-11-27 19:07:16.905364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:07.409 [2024-11-27 19:07:16.905487] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:07.409 [2024-11-27 19:07:16.905559] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:07.409 [2024-11-27 19:07:16.905575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.409 [2024-11-27 19:07:16.905585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:07.409 request: 00:09:07.409 { 00:09:07.409 "name": "raid_bdev1", 00:09:07.409 "raid_level": "raid1", 00:09:07.409 "base_bdevs": [ 00:09:07.409 "malloc1", 00:09:07.409 "malloc2" 00:09:07.409 ], 00:09:07.409 "superblock": false, 00:09:07.409 "method": "bdev_raid_create", 00:09:07.409 "req_id": 1 00:09:07.409 } 00:09:07.409 Got 
JSON-RPC error response 00:09:07.409 response: 00:09:07.409 { 00:09:07.409 "code": -17, 00:09:07.409 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:07.409 } 00:09:07.409 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.410 [2024-11-27 19:07:16.967035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.410 [2024-11-27 19:07:16.967126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:07.410 [2024-11-27 19:07:16.967172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:07.410 [2024-11-27 19:07:16.967204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.410 [2024-11-27 19:07:16.969682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.410 [2024-11-27 19:07:16.969771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.410 [2024-11-27 19:07:16.969876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:07.410 [2024-11-27 19:07:16.969966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.410 pt1 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.410 
19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.410 19:07:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.410 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.410 "name": "raid_bdev1", 00:09:07.410 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:07.410 "strip_size_kb": 0, 00:09:07.410 "state": "configuring", 00:09:07.410 "raid_level": "raid1", 00:09:07.410 "superblock": true, 00:09:07.410 "num_base_bdevs": 2, 00:09:07.410 "num_base_bdevs_discovered": 1, 00:09:07.410 "num_base_bdevs_operational": 2, 00:09:07.410 "base_bdevs_list": [ 00:09:07.410 { 00:09:07.410 "name": "pt1", 00:09:07.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.410 "is_configured": true, 00:09:07.410 "data_offset": 2048, 00:09:07.410 "data_size": 63488 00:09:07.410 }, 00:09:07.410 { 00:09:07.410 "name": null, 00:09:07.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.410 "is_configured": false, 00:09:07.410 "data_offset": 2048, 00:09:07.410 "data_size": 63488 00:09:07.410 } 00:09:07.410 ] 00:09:07.410 }' 00:09:07.410 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.410 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:07.978 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:07.978 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:09:07.978 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.978 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.978 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 [2024-11-27 19:07:17.462253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.978 [2024-11-27 19:07:17.462409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.978 [2024-11-27 19:07:17.462441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:07.978 [2024-11-27 19:07:17.462453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.978 [2024-11-27 19:07:17.463010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.978 [2024-11-27 19:07:17.463046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.978 [2024-11-27 19:07:17.463151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:07.978 [2024-11-27 19:07:17.463182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.978 [2024-11-27 19:07:17.463313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.978 [2024-11-27 19:07:17.463330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.979 [2024-11-27 19:07:17.463605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:07.979 [2024-11-27 19:07:17.463785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.979 [2024-11-27 19:07:17.463795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:09:07.979 [2024-11-27 19:07:17.463944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.979 pt2 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.979 "name": "raid_bdev1", 00:09:07.979 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:07.979 "strip_size_kb": 0, 00:09:07.979 "state": "online", 00:09:07.979 "raid_level": "raid1", 00:09:07.979 "superblock": true, 00:09:07.979 "num_base_bdevs": 2, 00:09:07.979 "num_base_bdevs_discovered": 2, 00:09:07.979 "num_base_bdevs_operational": 2, 00:09:07.979 "base_bdevs_list": [ 00:09:07.979 { 00:09:07.979 "name": "pt1", 00:09:07.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.979 "is_configured": true, 00:09:07.979 "data_offset": 2048, 00:09:07.979 "data_size": 63488 00:09:07.979 }, 00:09:07.979 { 00:09:07.979 "name": "pt2", 00:09:07.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.979 "is_configured": true, 00:09:07.979 "data_offset": 2048, 00:09:07.979 "data_size": 63488 00:09:07.979 } 00:09:07.979 ] 00:09:07.979 }' 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.979 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.238 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.497 [2024-11-27 19:07:17.873790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.497 19:07:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.497 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.497 "name": "raid_bdev1", 00:09:08.497 "aliases": [ 00:09:08.497 "9ba77285-38de-47a9-8ac5-b1d74e94716d" 00:09:08.497 ], 00:09:08.497 "product_name": "Raid Volume", 00:09:08.497 "block_size": 512, 00:09:08.497 "num_blocks": 63488, 00:09:08.497 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:08.497 "assigned_rate_limits": { 00:09:08.497 "rw_ios_per_sec": 0, 00:09:08.497 "rw_mbytes_per_sec": 0, 00:09:08.497 "r_mbytes_per_sec": 0, 00:09:08.497 "w_mbytes_per_sec": 0 00:09:08.497 }, 00:09:08.497 "claimed": false, 00:09:08.497 "zoned": false, 00:09:08.497 "supported_io_types": { 00:09:08.497 "read": true, 00:09:08.497 "write": true, 00:09:08.497 "unmap": false, 00:09:08.497 "flush": false, 00:09:08.497 "reset": true, 00:09:08.497 "nvme_admin": false, 00:09:08.497 "nvme_io": false, 00:09:08.497 "nvme_io_md": false, 00:09:08.497 "write_zeroes": true, 00:09:08.497 "zcopy": false, 00:09:08.497 "get_zone_info": false, 00:09:08.497 "zone_management": false, 00:09:08.497 "zone_append": false, 00:09:08.497 "compare": false, 00:09:08.497 "compare_and_write": false, 00:09:08.497 "abort": false, 00:09:08.497 "seek_hole": false, 00:09:08.497 "seek_data": false, 00:09:08.497 "copy": false, 00:09:08.497 "nvme_iov_md": false 00:09:08.497 }, 00:09:08.497 "memory_domains": [ 00:09:08.497 { 00:09:08.497 "dma_device_id": 
"system", 00:09:08.497 "dma_device_type": 1 00:09:08.497 }, 00:09:08.497 { 00:09:08.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.497 "dma_device_type": 2 00:09:08.497 }, 00:09:08.497 { 00:09:08.497 "dma_device_id": "system", 00:09:08.497 "dma_device_type": 1 00:09:08.497 }, 00:09:08.497 { 00:09:08.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.497 "dma_device_type": 2 00:09:08.497 } 00:09:08.497 ], 00:09:08.497 "driver_specific": { 00:09:08.497 "raid": { 00:09:08.497 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:08.497 "strip_size_kb": 0, 00:09:08.497 "state": "online", 00:09:08.497 "raid_level": "raid1", 00:09:08.497 "superblock": true, 00:09:08.497 "num_base_bdevs": 2, 00:09:08.497 "num_base_bdevs_discovered": 2, 00:09:08.497 "num_base_bdevs_operational": 2, 00:09:08.497 "base_bdevs_list": [ 00:09:08.497 { 00:09:08.497 "name": "pt1", 00:09:08.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.497 "is_configured": true, 00:09:08.497 "data_offset": 2048, 00:09:08.497 "data_size": 63488 00:09:08.497 }, 00:09:08.497 { 00:09:08.497 "name": "pt2", 00:09:08.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.497 "is_configured": true, 00:09:08.497 "data_offset": 2048, 00:09:08.497 "data_size": 63488 00:09:08.497 } 00:09:08.497 ] 00:09:08.497 } 00:09:08.497 } 00:09:08.497 }' 00:09:08.497 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.497 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:08.497 pt2' 00:09:08.497 19:07:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.497 [2024-11-27 19:07:18.101325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.497 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9ba77285-38de-47a9-8ac5-b1d74e94716d '!=' 9ba77285-38de-47a9-8ac5-b1d74e94716d ']' 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.757 [2024-11-27 19:07:18.141059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.757 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.757 "name": "raid_bdev1", 00:09:08.757 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:08.758 "strip_size_kb": 0, 00:09:08.758 "state": "online", 00:09:08.758 "raid_level": "raid1", 00:09:08.758 "superblock": true, 00:09:08.758 "num_base_bdevs": 2, 00:09:08.758 "num_base_bdevs_discovered": 1, 00:09:08.758 "num_base_bdevs_operational": 1, 00:09:08.758 "base_bdevs_list": [ 00:09:08.758 { 00:09:08.758 "name": null, 00:09:08.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.758 "is_configured": false, 00:09:08.758 "data_offset": 0, 00:09:08.758 "data_size": 63488 00:09:08.758 }, 00:09:08.758 { 00:09:08.758 "name": "pt2", 00:09:08.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.758 "is_configured": true, 00:09:08.758 "data_offset": 2048, 00:09:08.758 "data_size": 63488 00:09:08.758 } 00:09:08.758 ] 00:09:08.758 }' 
00:09:08.758 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.758 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 [2024-11-27 19:07:18.564322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.021 [2024-11-27 19:07:18.564400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.021 [2024-11-27 19:07:18.564511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.021 [2024-11-27 19:07:18.564580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.021 [2024-11-27 19:07:18.564640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 [2024-11-27 19:07:18.636172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:09.021 [2024-11-27 19:07:18.636272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.021 [2024-11-27 19:07:18.636307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:09.021 [2024-11-27 19:07:18.636337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.021 
[2024-11-27 19:07:18.638882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.021 [2024-11-27 19:07:18.638971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:09.021 [2024-11-27 19:07:18.639081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:09.021 [2024-11-27 19:07:18.639147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:09.021 [2024-11-27 19:07:18.639310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:09.021 [2024-11-27 19:07:18.639353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:09.021 [2024-11-27 19:07:18.639611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:09.021 [2024-11-27 19:07:18.639823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:09.021 [2024-11-27 19:07:18.639867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:09.021 [2024-11-27 19:07:18.640051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.021 pt2 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.021 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.281 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.281 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.281 "name": "raid_bdev1", 00:09:09.281 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:09.281 "strip_size_kb": 0, 00:09:09.281 "state": "online", 00:09:09.281 "raid_level": "raid1", 00:09:09.281 "superblock": true, 00:09:09.281 "num_base_bdevs": 2, 00:09:09.281 "num_base_bdevs_discovered": 1, 00:09:09.281 "num_base_bdevs_operational": 1, 00:09:09.281 "base_bdevs_list": [ 00:09:09.281 { 00:09:09.281 "name": null, 00:09:09.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.281 "is_configured": false, 00:09:09.281 "data_offset": 2048, 00:09:09.281 "data_size": 63488 00:09:09.281 }, 00:09:09.281 { 00:09:09.281 "name": "pt2", 00:09:09.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.281 "is_configured": true, 00:09:09.281 "data_offset": 2048, 00:09:09.281 "data_size": 63488 00:09:09.281 } 00:09:09.281 ] 00:09:09.281 }' 
00:09:09.281 19:07:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.281 19:07:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.540 [2024-11-27 19:07:19.111388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.540 [2024-11-27 19:07:19.111422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.540 [2024-11-27 19:07:19.111503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.540 [2024-11-27 19:07:19.111556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.540 [2024-11-27 19:07:19.111566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.540 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.798 [2024-11-27 19:07:19.175297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:09.798 [2024-11-27 19:07:19.175415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.798 [2024-11-27 19:07:19.175442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:09.798 [2024-11-27 19:07:19.175453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.798 [2024-11-27 19:07:19.178047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.798 [2024-11-27 19:07:19.178085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:09.798 [2024-11-27 19:07:19.178180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:09.798 [2024-11-27 19:07:19.178228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:09.798 [2024-11-27 19:07:19.178389] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:09.798 [2024-11-27 19:07:19.178401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.798 [2024-11-27 19:07:19.178418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:09.798 [2024-11-27 19:07:19.178477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:09:09.798 [2024-11-27 19:07:19.178556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:09.798 [2024-11-27 19:07:19.178601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:09.798 [2024-11-27 19:07:19.178901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:09.798 [2024-11-27 19:07:19.179072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:09.798 [2024-11-27 19:07:19.179086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:09.798 [2024-11-27 19:07:19.179283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.798 pt1 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.798 "name": "raid_bdev1", 00:09:09.798 "uuid": "9ba77285-38de-47a9-8ac5-b1d74e94716d", 00:09:09.798 "strip_size_kb": 0, 00:09:09.798 "state": "online", 00:09:09.798 "raid_level": "raid1", 00:09:09.798 "superblock": true, 00:09:09.798 "num_base_bdevs": 2, 00:09:09.798 "num_base_bdevs_discovered": 1, 00:09:09.798 "num_base_bdevs_operational": 1, 00:09:09.798 "base_bdevs_list": [ 00:09:09.798 { 00:09:09.798 "name": null, 00:09:09.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.798 "is_configured": false, 00:09:09.798 "data_offset": 2048, 00:09:09.798 "data_size": 63488 00:09:09.798 }, 00:09:09.798 { 00:09:09.798 "name": "pt2", 00:09:09.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.798 "is_configured": true, 00:09:09.798 "data_offset": 2048, 00:09:09.798 "data_size": 63488 00:09:09.798 } 00:09:09.798 ] 00:09:09.798 }' 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.798 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.056 [2024-11-27 19:07:19.650789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9ba77285-38de-47a9-8ac5-b1d74e94716d '!=' 9ba77285-38de-47a9-8ac5-b1d74e94716d ']' 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63296 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63296 ']' 00:09:10.056 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63296 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63296 00:09:10.315 killing process with pid 
63296 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63296' 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63296 00:09:10.315 [2024-11-27 19:07:19.735299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.315 [2024-11-27 19:07:19.735398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.315 [2024-11-27 19:07:19.735449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.315 [2024-11-27 19:07:19.735464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:10.315 19:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63296 00:09:10.574 [2024-11-27 19:07:19.957734] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.975 19:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:11.975 00:09:11.975 real 0m6.182s 00:09:11.975 user 0m9.211s 00:09:11.975 sys 0m1.129s 00:09:11.975 19:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.975 ************************************ 00:09:11.975 END TEST raid_superblock_test 00:09:11.975 ************************************ 00:09:11.975 19:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.975 19:07:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:11.975 19:07:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.975 19:07:21 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.975 19:07:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.975 ************************************ 00:09:11.975 START TEST raid_read_error_test 00:09:11.975 ************************************ 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:11.975 19:07:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.z2UG5P81Dr 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63626 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63626 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63626 ']' 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.975 19:07:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.975 [2024-11-27 19:07:21.354877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:11.975 [2024-11-27 19:07:21.355118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:09:11.975 [2024-11-27 19:07:21.533852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.235 [2024-11-27 19:07:21.671986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.495 [2024-11-27 19:07:21.906282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.495 [2024-11-27 19:07:21.906330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.756 BaseBdev1_malloc 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.756 true 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.756 [2024-11-27 19:07:22.250095] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:12.756 [2024-11-27 19:07:22.250156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.756 [2024-11-27 19:07:22.250178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:12.756 [2024-11-27 19:07:22.250190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.756 [2024-11-27 19:07:22.252613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.756 [2024-11-27 19:07:22.252765] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:12.756 BaseBdev1 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:12.756 BaseBdev2_malloc 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.756 true 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.756 [2024-11-27 19:07:22.323911] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:12.756 [2024-11-27 19:07:22.324011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.756 [2024-11-27 19:07:22.324032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:12.756 [2024-11-27 19:07:22.324044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.756 [2024-11-27 19:07:22.326403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.756 [2024-11-27 19:07:22.326444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:12.756 BaseBdev2 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:12.756 19:07:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.756 [2024-11-27 19:07:22.335951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.756 [2024-11-27 19:07:22.337943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.756 [2024-11-27 19:07:22.338148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.756 [2024-11-27 19:07:22.338171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.756 [2024-11-27 19:07:22.338421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.756 [2024-11-27 19:07:22.338609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.756 [2024-11-27 19:07:22.338619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:12.756 [2024-11-27 19:07:22.338771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.756 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.757 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.757 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.757 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.757 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.757 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.757 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.016 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.016 "name": "raid_bdev1", 00:09:13.016 "uuid": "78771545-0f77-438e-8d9b-1348c2f9e9b0", 00:09:13.016 "strip_size_kb": 0, 00:09:13.016 "state": "online", 00:09:13.016 "raid_level": "raid1", 00:09:13.016 "superblock": true, 00:09:13.016 "num_base_bdevs": 2, 00:09:13.016 "num_base_bdevs_discovered": 2, 00:09:13.016 "num_base_bdevs_operational": 2, 00:09:13.016 "base_bdevs_list": [ 00:09:13.016 { 00:09:13.016 "name": "BaseBdev1", 00:09:13.016 "uuid": "20dccffd-bd37-56e4-856b-8d9a46d0e530", 00:09:13.016 "is_configured": true, 00:09:13.016 "data_offset": 2048, 00:09:13.016 "data_size": 63488 00:09:13.016 }, 00:09:13.016 { 00:09:13.016 "name": "BaseBdev2", 00:09:13.016 "uuid": "ae1a0066-a202-510e-bcff-67270bc21e89", 00:09:13.016 "is_configured": true, 00:09:13.016 "data_offset": 2048, 00:09:13.016 "data_size": 63488 00:09:13.016 } 00:09:13.016 ] 00:09:13.016 }' 00:09:13.016 19:07:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.016 19:07:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.276 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:13.276 19:07:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:13.535 [2024-11-27 19:07:22.912261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.474 19:07:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.474 "name": "raid_bdev1", 00:09:14.474 "uuid": "78771545-0f77-438e-8d9b-1348c2f9e9b0", 00:09:14.474 "strip_size_kb": 0, 00:09:14.474 "state": "online", 00:09:14.474 "raid_level": "raid1", 00:09:14.474 "superblock": true, 00:09:14.474 "num_base_bdevs": 2, 00:09:14.474 "num_base_bdevs_discovered": 2, 00:09:14.474 "num_base_bdevs_operational": 2, 00:09:14.474 "base_bdevs_list": [ 00:09:14.474 { 00:09:14.474 "name": "BaseBdev1", 00:09:14.474 "uuid": "20dccffd-bd37-56e4-856b-8d9a46d0e530", 00:09:14.474 "is_configured": true, 00:09:14.474 "data_offset": 2048, 00:09:14.474 "data_size": 63488 00:09:14.474 }, 00:09:14.474 { 00:09:14.474 "name": "BaseBdev2", 00:09:14.474 "uuid": "ae1a0066-a202-510e-bcff-67270bc21e89", 00:09:14.474 "is_configured": true, 00:09:14.474 "data_offset": 2048, 00:09:14.474 "data_size": 63488 
00:09:14.474 } 00:09:14.474 ] 00:09:14.474 }' 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.474 19:07:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.733 [2024-11-27 19:07:24.242424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.733 [2024-11-27 19:07:24.242546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.733 [2024-11-27 19:07:24.245455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.733 [2024-11-27 19:07:24.245549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.733 [2024-11-27 19:07:24.245673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.733 [2024-11-27 19:07:24.245758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.733 { 00:09:14.733 "results": [ 00:09:14.733 { 00:09:14.733 "job": "raid_bdev1", 00:09:14.733 "core_mask": "0x1", 00:09:14.733 "workload": "randrw", 00:09:14.733 "percentage": 50, 00:09:14.733 "status": "finished", 00:09:14.733 "queue_depth": 1, 00:09:14.733 "io_size": 131072, 00:09:14.733 "runtime": 1.330792, 00:09:14.733 "iops": 14558.999452957336, 00:09:14.733 "mibps": 1819.874931619667, 00:09:14.733 "io_failed": 0, 00:09:14.733 "io_timeout": 0, 00:09:14.733 "avg_latency_us": 66.13649471756585, 00:09:14.733 "min_latency_us": 
22.69344978165939, 00:09:14.733 "max_latency_us": 1395.1441048034935 00:09:14.733 } 00:09:14.733 ], 00:09:14.733 "core_count": 1 00:09:14.733 } 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63626 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63626 ']' 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63626 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63626 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63626' 00:09:14.733 killing process with pid 63626 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63626 00:09:14.733 [2024-11-27 19:07:24.291343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.733 19:07:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63626 00:09:14.993 [2024-11-27 19:07:24.437518] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.z2UG5P81Dr 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:16.374 ************************************ 00:09:16.374 END TEST raid_read_error_test 00:09:16.374 ************************************ 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:16.374 00:09:16.374 real 0m4.461s 00:09:16.374 user 0m5.185s 00:09:16.374 sys 0m0.677s 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.374 19:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.374 19:07:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:16.374 19:07:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.374 19:07:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.374 19:07:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.374 ************************************ 00:09:16.374 START TEST raid_write_error_test 00:09:16.374 ************************************ 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aEqZ2p1syf 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63773 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63773 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63773 ']' 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.374 19:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.374 [2024-11-27 19:07:25.892825] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:16.374 [2024-11-27 19:07:25.892994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63773 ] 00:09:16.634 [2024-11-27 19:07:26.065119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.634 [2024-11-27 19:07:26.204775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.893 [2024-11-27 19:07:26.434289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.893 [2024-11-27 19:07:26.434334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.153 BaseBdev1_malloc 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.153 true 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.153 [2024-11-27 19:07:26.780143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.153 [2024-11-27 19:07:26.780204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.153 [2024-11-27 19:07:26.780225] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:17.153 [2024-11-27 19:07:26.780237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.153 [2024-11-27 19:07:26.782644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.153 [2024-11-27 19:07:26.782758] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.153 BaseBdev1 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.153 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.413 BaseBdev2_malloc 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.413 19:07:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.413 true 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.413 [2024-11-27 19:07:26.852498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.413 [2024-11-27 19:07:26.852614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.413 [2024-11-27 19:07:26.852635] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.413 [2024-11-27 19:07:26.852647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.413 [2024-11-27 19:07:26.855017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.413 [2024-11-27 19:07:26.855056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.413 BaseBdev2 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.413 [2024-11-27 19:07:26.864540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:17.413 [2024-11-27 19:07:26.866692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.413 [2024-11-27 19:07:26.866998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.413 [2024-11-27 19:07:26.867019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.413 [2024-11-27 19:07:26.867273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:17.413 [2024-11-27 19:07:26.867462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.413 [2024-11-27 19:07:26.867473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:17.413 [2024-11-27 19:07:26.867621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.413 "name": "raid_bdev1", 00:09:17.413 "uuid": "e07aca6b-08c4-4cfa-a57e-7ed5ef67f74e", 00:09:17.413 "strip_size_kb": 0, 00:09:17.413 "state": "online", 00:09:17.413 "raid_level": "raid1", 00:09:17.413 "superblock": true, 00:09:17.413 "num_base_bdevs": 2, 00:09:17.413 "num_base_bdevs_discovered": 2, 00:09:17.413 "num_base_bdevs_operational": 2, 00:09:17.413 "base_bdevs_list": [ 00:09:17.413 { 00:09:17.413 "name": "BaseBdev1", 00:09:17.413 "uuid": "064e8303-ff71-55c8-9f20-9246b9de9ed3", 00:09:17.413 "is_configured": true, 00:09:17.413 "data_offset": 2048, 00:09:17.413 "data_size": 63488 00:09:17.413 }, 00:09:17.413 { 00:09:17.413 "name": "BaseBdev2", 00:09:17.413 "uuid": "b01b6ab9-a391-53f6-90d7-ad6e6a558ac4", 00:09:17.413 "is_configured": true, 00:09:17.413 "data_offset": 2048, 00:09:17.413 "data_size": 63488 00:09:17.413 } 00:09:17.413 ] 00:09:17.413 }' 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.413 19:07:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.672 19:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:17.672 19:07:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.932 [2024-11-27 19:07:27.385142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.871 [2024-11-27 19:07:28.299916] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:18.871 [2024-11-27 19:07:28.299990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.871 [2024-11-27 19:07:28.300204] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.871 "name": "raid_bdev1", 00:09:18.871 "uuid": "e07aca6b-08c4-4cfa-a57e-7ed5ef67f74e", 00:09:18.871 "strip_size_kb": 0, 00:09:18.871 "state": "online", 00:09:18.871 "raid_level": "raid1", 00:09:18.871 "superblock": true, 00:09:18.871 "num_base_bdevs": 2, 00:09:18.871 "num_base_bdevs_discovered": 1, 00:09:18.871 "num_base_bdevs_operational": 1, 00:09:18.871 "base_bdevs_list": [ 00:09:18.871 { 00:09:18.871 "name": null, 00:09:18.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.871 "is_configured": false, 00:09:18.871 "data_offset": 0, 00:09:18.871 "data_size": 63488 00:09:18.871 }, 00:09:18.871 { 00:09:18.871 "name": 
"BaseBdev2", 00:09:18.871 "uuid": "b01b6ab9-a391-53f6-90d7-ad6e6a558ac4", 00:09:18.871 "is_configured": true, 00:09:18.871 "data_offset": 2048, 00:09:18.871 "data_size": 63488 00:09:18.871 } 00:09:18.871 ] 00:09:18.871 }' 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.871 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.131 [2024-11-27 19:07:28.753469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.131 [2024-11-27 19:07:28.753507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.131 [2024-11-27 19:07:28.756104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.131 [2024-11-27 19:07:28.756154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.131 [2024-11-27 19:07:28.756223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.131 [2024-11-27 19:07:28.756242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:19.131 { 00:09:19.131 "results": [ 00:09:19.131 { 00:09:19.131 "job": "raid_bdev1", 00:09:19.131 "core_mask": "0x1", 00:09:19.131 "workload": "randrw", 00:09:19.131 "percentage": 50, 00:09:19.131 "status": "finished", 00:09:19.131 "queue_depth": 1, 00:09:19.131 "io_size": 131072, 00:09:19.131 "runtime": 1.368902, 00:09:19.131 "iops": 17806.241790865963, 00:09:19.131 "mibps": 2225.7802238582453, 00:09:19.131 "io_failed": 0, 00:09:19.131 "io_timeout": 0, 
00:09:19.131 "avg_latency_us": 53.55051282947038, 00:09:19.131 "min_latency_us": 21.240174672489083, 00:09:19.131 "max_latency_us": 1352.216593886463 00:09:19.131 } 00:09:19.131 ], 00:09:19.131 "core_count": 1 00:09:19.131 } 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63773 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63773 ']' 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63773 00:09:19.131 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63773 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.391 killing process with pid 63773 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63773' 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63773 00:09:19.391 [2024-11-27 19:07:28.801267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.391 19:07:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63773 00:09:19.391 [2024-11-27 19:07:28.947721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.aEqZ2p1syf 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:20.874 00:09:20.874 real 0m4.441s 00:09:20.874 user 0m5.192s 00:09:20.874 sys 0m0.636s 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.874 19:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.874 ************************************ 00:09:20.874 END TEST raid_write_error_test 00:09:20.874 ************************************ 00:09:20.874 19:07:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:20.874 19:07:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.874 19:07:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:20.874 19:07:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:20.874 19:07:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.874 19:07:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.874 ************************************ 00:09:20.874 START TEST raid_state_function_test 00:09:20.874 ************************************ 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.874 19:07:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63915 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63915' 00:09:20.874 Process raid pid: 63915 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63915 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63915 ']' 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.874 19:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.874 [2024-11-27 19:07:30.398432] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:20.874 [2024-11-27 19:07:30.398545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.134 [2024-11-27 19:07:30.574725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.134 [2024-11-27 19:07:30.703859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.394 [2024-11-27 19:07:30.940596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.394 [2024-11-27 19:07:30.940641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.655 [2024-11-27 19:07:31.219538] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.655 [2024-11-27 19:07:31.219618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.655 [2024-11-27 19:07:31.219629] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.655 [2024-11-27 19:07:31.219640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.655 [2024-11-27 19:07:31.219647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.655 [2024-11-27 19:07:31.219658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.655 19:07:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.655 "name": "Existed_Raid", 00:09:21.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.655 "strip_size_kb": 64, 00:09:21.655 "state": "configuring", 00:09:21.655 "raid_level": "raid0", 00:09:21.655 "superblock": false, 00:09:21.655 "num_base_bdevs": 3, 00:09:21.655 "num_base_bdevs_discovered": 0, 00:09:21.655 "num_base_bdevs_operational": 3, 00:09:21.655 "base_bdevs_list": [ 00:09:21.655 { 00:09:21.655 "name": "BaseBdev1", 00:09:21.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.655 "is_configured": false, 00:09:21.655 "data_offset": 0, 00:09:21.655 "data_size": 0 00:09:21.655 }, 00:09:21.655 { 00:09:21.655 "name": "BaseBdev2", 00:09:21.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.655 "is_configured": false, 00:09:21.655 "data_offset": 0, 00:09:21.655 "data_size": 0 00:09:21.655 }, 00:09:21.655 { 00:09:21.655 "name": "BaseBdev3", 00:09:21.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.655 "is_configured": false, 00:09:21.655 "data_offset": 0, 00:09:21.655 "data_size": 0 00:09:21.655 } 00:09:21.655 ] 00:09:21.655 }' 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.655 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.226 19:07:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 [2024-11-27 19:07:31.694720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.226 [2024-11-27 19:07:31.694760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 [2024-11-27 19:07:31.706681] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.226 [2024-11-27 19:07:31.706761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.226 [2024-11-27 19:07:31.706770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.226 [2024-11-27 19:07:31.706780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.226 [2024-11-27 19:07:31.706786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.226 [2024-11-27 19:07:31.706795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 [2024-11-27 19:07:31.760415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.226 BaseBdev1 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.226 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.226 [ 00:09:22.226 { 00:09:22.226 "name": "BaseBdev1", 00:09:22.226 "aliases": [ 00:09:22.226 "f32737e4-6ceb-41f7-90b6-845223163223" 00:09:22.226 ], 00:09:22.226 
"product_name": "Malloc disk", 00:09:22.226 "block_size": 512, 00:09:22.226 "num_blocks": 65536, 00:09:22.226 "uuid": "f32737e4-6ceb-41f7-90b6-845223163223", 00:09:22.226 "assigned_rate_limits": { 00:09:22.226 "rw_ios_per_sec": 0, 00:09:22.226 "rw_mbytes_per_sec": 0, 00:09:22.226 "r_mbytes_per_sec": 0, 00:09:22.226 "w_mbytes_per_sec": 0 00:09:22.226 }, 00:09:22.226 "claimed": true, 00:09:22.226 "claim_type": "exclusive_write", 00:09:22.226 "zoned": false, 00:09:22.226 "supported_io_types": { 00:09:22.226 "read": true, 00:09:22.226 "write": true, 00:09:22.226 "unmap": true, 00:09:22.226 "flush": true, 00:09:22.226 "reset": true, 00:09:22.226 "nvme_admin": false, 00:09:22.226 "nvme_io": false, 00:09:22.226 "nvme_io_md": false, 00:09:22.226 "write_zeroes": true, 00:09:22.226 "zcopy": true, 00:09:22.226 "get_zone_info": false, 00:09:22.226 "zone_management": false, 00:09:22.226 "zone_append": false, 00:09:22.226 "compare": false, 00:09:22.226 "compare_and_write": false, 00:09:22.226 "abort": true, 00:09:22.226 "seek_hole": false, 00:09:22.226 "seek_data": false, 00:09:22.226 "copy": true, 00:09:22.226 "nvme_iov_md": false 00:09:22.226 }, 00:09:22.226 "memory_domains": [ 00:09:22.226 { 00:09:22.226 "dma_device_id": "system", 00:09:22.226 "dma_device_type": 1 00:09:22.226 }, 00:09:22.226 { 00:09:22.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.226 "dma_device_type": 2 00:09:22.226 } 00:09:22.226 ], 00:09:22.226 "driver_specific": {} 00:09:22.226 } 00:09:22.226 ] 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.227 19:07:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.227 "name": "Existed_Raid", 00:09:22.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.227 "strip_size_kb": 64, 00:09:22.227 "state": "configuring", 00:09:22.227 "raid_level": "raid0", 00:09:22.227 "superblock": false, 00:09:22.227 "num_base_bdevs": 3, 00:09:22.227 "num_base_bdevs_discovered": 1, 00:09:22.227 "num_base_bdevs_operational": 3, 00:09:22.227 "base_bdevs_list": [ 00:09:22.227 { 00:09:22.227 "name": "BaseBdev1", 
00:09:22.227 "uuid": "f32737e4-6ceb-41f7-90b6-845223163223", 00:09:22.227 "is_configured": true, 00:09:22.227 "data_offset": 0, 00:09:22.227 "data_size": 65536 00:09:22.227 }, 00:09:22.227 { 00:09:22.227 "name": "BaseBdev2", 00:09:22.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.227 "is_configured": false, 00:09:22.227 "data_offset": 0, 00:09:22.227 "data_size": 0 00:09:22.227 }, 00:09:22.227 { 00:09:22.227 "name": "BaseBdev3", 00:09:22.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.227 "is_configured": false, 00:09:22.227 "data_offset": 0, 00:09:22.227 "data_size": 0 00:09:22.227 } 00:09:22.227 ] 00:09:22.227 }' 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.227 19:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.796 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.796 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.796 [2024-11-27 19:07:32.247596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.796 [2024-11-27 19:07:32.247652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:22.796 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.796 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 [2024-11-27 
19:07:32.259626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.797 [2024-11-27 19:07:32.261757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.797 [2024-11-27 19:07:32.261799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.797 [2024-11-27 19:07:32.261810] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.797 [2024-11-27 19:07:32.261819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.797 "name": "Existed_Raid", 00:09:22.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.797 "strip_size_kb": 64, 00:09:22.797 "state": "configuring", 00:09:22.797 "raid_level": "raid0", 00:09:22.797 "superblock": false, 00:09:22.797 "num_base_bdevs": 3, 00:09:22.797 "num_base_bdevs_discovered": 1, 00:09:22.797 "num_base_bdevs_operational": 3, 00:09:22.797 "base_bdevs_list": [ 00:09:22.797 { 00:09:22.797 "name": "BaseBdev1", 00:09:22.797 "uuid": "f32737e4-6ceb-41f7-90b6-845223163223", 00:09:22.797 "is_configured": true, 00:09:22.797 "data_offset": 0, 00:09:22.797 "data_size": 65536 00:09:22.797 }, 00:09:22.797 { 00:09:22.797 "name": "BaseBdev2", 00:09:22.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.797 "is_configured": false, 00:09:22.797 "data_offset": 0, 00:09:22.797 "data_size": 0 00:09:22.797 }, 00:09:22.797 { 00:09:22.797 "name": "BaseBdev3", 00:09:22.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.797 "is_configured": false, 00:09:22.797 "data_offset": 0, 00:09:22.797 "data_size": 0 00:09:22.797 } 00:09:22.797 ] 00:09:22.797 }' 00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:22.797 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 [2024-11-27 19:07:32.751045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.368 BaseBdev2 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.368 19:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.368 [ 00:09:23.368 { 00:09:23.368 "name": "BaseBdev2", 00:09:23.368 "aliases": [ 00:09:23.368 "6f734dcd-533e-4b0b-9e66-5a3ea9d53c40" 00:09:23.368 ], 00:09:23.368 "product_name": "Malloc disk", 00:09:23.368 "block_size": 512, 00:09:23.368 "num_blocks": 65536, 00:09:23.368 "uuid": "6f734dcd-533e-4b0b-9e66-5a3ea9d53c40", 00:09:23.368 "assigned_rate_limits": { 00:09:23.368 "rw_ios_per_sec": 0, 00:09:23.368 "rw_mbytes_per_sec": 0, 00:09:23.368 "r_mbytes_per_sec": 0, 00:09:23.368 "w_mbytes_per_sec": 0 00:09:23.368 }, 00:09:23.368 "claimed": true, 00:09:23.368 "claim_type": "exclusive_write", 00:09:23.368 "zoned": false, 00:09:23.368 "supported_io_types": { 00:09:23.368 "read": true, 00:09:23.368 "write": true, 00:09:23.368 "unmap": true, 00:09:23.368 "flush": true, 00:09:23.368 "reset": true, 00:09:23.368 "nvme_admin": false, 00:09:23.368 "nvme_io": false, 00:09:23.368 "nvme_io_md": false, 00:09:23.368 "write_zeroes": true, 00:09:23.368 "zcopy": true, 00:09:23.368 "get_zone_info": false, 00:09:23.368 "zone_management": false, 00:09:23.368 "zone_append": false, 00:09:23.368 "compare": false, 00:09:23.368 "compare_and_write": false, 00:09:23.368 "abort": true, 00:09:23.368 "seek_hole": false, 00:09:23.368 "seek_data": false, 00:09:23.368 "copy": true, 00:09:23.368 "nvme_iov_md": false 00:09:23.368 }, 00:09:23.368 "memory_domains": [ 00:09:23.368 { 00:09:23.368 "dma_device_id": "system", 00:09:23.368 "dma_device_type": 1 00:09:23.368 }, 00:09:23.368 { 00:09:23.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.368 "dma_device_type": 2 00:09:23.368 } 00:09:23.368 ], 00:09:23.368 "driver_specific": {} 00:09:23.368 } 00:09:23.368 ] 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.368 19:07:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.368 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.369 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.369 19:07:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.369 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.369 "name": "Existed_Raid", 00:09:23.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.369 "strip_size_kb": 64, 00:09:23.369 "state": "configuring", 00:09:23.369 "raid_level": "raid0", 00:09:23.369 "superblock": false, 00:09:23.369 "num_base_bdevs": 3, 00:09:23.369 "num_base_bdevs_discovered": 2, 00:09:23.369 "num_base_bdevs_operational": 3, 00:09:23.369 "base_bdevs_list": [ 00:09:23.369 { 00:09:23.369 "name": "BaseBdev1", 00:09:23.369 "uuid": "f32737e4-6ceb-41f7-90b6-845223163223", 00:09:23.369 "is_configured": true, 00:09:23.369 "data_offset": 0, 00:09:23.369 "data_size": 65536 00:09:23.369 }, 00:09:23.369 { 00:09:23.369 "name": "BaseBdev2", 00:09:23.369 "uuid": "6f734dcd-533e-4b0b-9e66-5a3ea9d53c40", 00:09:23.369 "is_configured": true, 00:09:23.369 "data_offset": 0, 00:09:23.369 "data_size": 65536 00:09:23.369 }, 00:09:23.369 { 00:09:23.369 "name": "BaseBdev3", 00:09:23.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.369 "is_configured": false, 00:09:23.369 "data_offset": 0, 00:09:23.369 "data_size": 0 00:09:23.369 } 00:09:23.369 ] 00:09:23.369 }' 00:09:23.369 19:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.369 19:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.628 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.628 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.628 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 [2024-11-27 19:07:33.300119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.888 [2024-11-27 19:07:33.300189] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.888 [2024-11-27 19:07:33.300221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:23.888 [2024-11-27 19:07:33.300744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:23.888 [2024-11-27 19:07:33.300960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.888 [2024-11-27 19:07:33.300978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.888 [2024-11-27 19:07:33.301281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.888 BaseBdev3 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.888 
19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 [ 00:09:23.888 { 00:09:23.888 "name": "BaseBdev3", 00:09:23.888 "aliases": [ 00:09:23.888 "b7f81a24-28e8-4024-9e6e-ab5be4fdb6eb" 00:09:23.888 ], 00:09:23.888 "product_name": "Malloc disk", 00:09:23.888 "block_size": 512, 00:09:23.888 "num_blocks": 65536, 00:09:23.888 "uuid": "b7f81a24-28e8-4024-9e6e-ab5be4fdb6eb", 00:09:23.888 "assigned_rate_limits": { 00:09:23.888 "rw_ios_per_sec": 0, 00:09:23.888 "rw_mbytes_per_sec": 0, 00:09:23.888 "r_mbytes_per_sec": 0, 00:09:23.888 "w_mbytes_per_sec": 0 00:09:23.888 }, 00:09:23.888 "claimed": true, 00:09:23.888 "claim_type": "exclusive_write", 00:09:23.888 "zoned": false, 00:09:23.888 "supported_io_types": { 00:09:23.888 "read": true, 00:09:23.888 "write": true, 00:09:23.888 "unmap": true, 00:09:23.888 "flush": true, 00:09:23.888 "reset": true, 00:09:23.888 "nvme_admin": false, 00:09:23.888 "nvme_io": false, 00:09:23.888 "nvme_io_md": false, 00:09:23.888 "write_zeroes": true, 00:09:23.888 "zcopy": true, 00:09:23.888 "get_zone_info": false, 00:09:23.888 "zone_management": false, 00:09:23.888 "zone_append": false, 00:09:23.888 "compare": false, 00:09:23.888 "compare_and_write": false, 00:09:23.888 "abort": true, 00:09:23.888 "seek_hole": false, 00:09:23.888 "seek_data": false, 00:09:23.888 "copy": true, 00:09:23.888 "nvme_iov_md": false 00:09:23.888 }, 00:09:23.888 "memory_domains": [ 00:09:23.888 { 00:09:23.888 "dma_device_id": "system", 00:09:23.888 "dma_device_type": 1 00:09:23.888 }, 00:09:23.888 { 00:09:23.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.888 "dma_device_type": 2 00:09:23.888 } 00:09:23.888 ], 00:09:23.888 "driver_specific": {} 00:09:23.888 } 00:09:23.888 ] 
00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.888 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.889 "name": "Existed_Raid", 00:09:23.889 "uuid": "9d6d84cf-c5b6-4289-91f2-918bc0b46402", 00:09:23.889 "strip_size_kb": 64, 00:09:23.889 "state": "online", 00:09:23.889 "raid_level": "raid0", 00:09:23.889 "superblock": false, 00:09:23.889 "num_base_bdevs": 3, 00:09:23.889 "num_base_bdevs_discovered": 3, 00:09:23.889 "num_base_bdevs_operational": 3, 00:09:23.889 "base_bdevs_list": [ 00:09:23.889 { 00:09:23.889 "name": "BaseBdev1", 00:09:23.889 "uuid": "f32737e4-6ceb-41f7-90b6-845223163223", 00:09:23.889 "is_configured": true, 00:09:23.889 "data_offset": 0, 00:09:23.889 "data_size": 65536 00:09:23.889 }, 00:09:23.889 { 00:09:23.889 "name": "BaseBdev2", 00:09:23.889 "uuid": "6f734dcd-533e-4b0b-9e66-5a3ea9d53c40", 00:09:23.889 "is_configured": true, 00:09:23.889 "data_offset": 0, 00:09:23.889 "data_size": 65536 00:09:23.889 }, 00:09:23.889 { 00:09:23.889 "name": "BaseBdev3", 00:09:23.889 "uuid": "b7f81a24-28e8-4024-9e6e-ab5be4fdb6eb", 00:09:23.889 "is_configured": true, 00:09:23.889 "data_offset": 0, 00:09:23.889 "data_size": 65536 00:09:23.889 } 00:09:23.889 ] 00:09:23.889 }' 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.889 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.459 [2024-11-27 19:07:33.807681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.459 "name": "Existed_Raid", 00:09:24.459 "aliases": [ 00:09:24.459 "9d6d84cf-c5b6-4289-91f2-918bc0b46402" 00:09:24.459 ], 00:09:24.459 "product_name": "Raid Volume", 00:09:24.459 "block_size": 512, 00:09:24.459 "num_blocks": 196608, 00:09:24.459 "uuid": "9d6d84cf-c5b6-4289-91f2-918bc0b46402", 00:09:24.459 "assigned_rate_limits": { 00:09:24.459 "rw_ios_per_sec": 0, 00:09:24.459 "rw_mbytes_per_sec": 0, 00:09:24.459 "r_mbytes_per_sec": 0, 00:09:24.459 "w_mbytes_per_sec": 0 00:09:24.459 }, 00:09:24.459 "claimed": false, 00:09:24.459 "zoned": false, 00:09:24.459 "supported_io_types": { 00:09:24.459 "read": true, 00:09:24.459 "write": true, 00:09:24.459 "unmap": true, 00:09:24.459 "flush": true, 00:09:24.459 "reset": true, 00:09:24.459 "nvme_admin": false, 00:09:24.459 "nvme_io": false, 00:09:24.459 "nvme_io_md": false, 00:09:24.459 "write_zeroes": true, 00:09:24.459 "zcopy": false, 00:09:24.459 "get_zone_info": false, 00:09:24.459 "zone_management": false, 00:09:24.459 
"zone_append": false, 00:09:24.459 "compare": false, 00:09:24.459 "compare_and_write": false, 00:09:24.459 "abort": false, 00:09:24.459 "seek_hole": false, 00:09:24.459 "seek_data": false, 00:09:24.459 "copy": false, 00:09:24.459 "nvme_iov_md": false 00:09:24.459 }, 00:09:24.459 "memory_domains": [ 00:09:24.459 { 00:09:24.459 "dma_device_id": "system", 00:09:24.459 "dma_device_type": 1 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.459 "dma_device_type": 2 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "dma_device_id": "system", 00:09:24.459 "dma_device_type": 1 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.459 "dma_device_type": 2 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "dma_device_id": "system", 00:09:24.459 "dma_device_type": 1 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.459 "dma_device_type": 2 00:09:24.459 } 00:09:24.459 ], 00:09:24.459 "driver_specific": { 00:09:24.459 "raid": { 00:09:24.459 "uuid": "9d6d84cf-c5b6-4289-91f2-918bc0b46402", 00:09:24.459 "strip_size_kb": 64, 00:09:24.459 "state": "online", 00:09:24.459 "raid_level": "raid0", 00:09:24.459 "superblock": false, 00:09:24.459 "num_base_bdevs": 3, 00:09:24.459 "num_base_bdevs_discovered": 3, 00:09:24.459 "num_base_bdevs_operational": 3, 00:09:24.459 "base_bdevs_list": [ 00:09:24.459 { 00:09:24.459 "name": "BaseBdev1", 00:09:24.459 "uuid": "f32737e4-6ceb-41f7-90b6-845223163223", 00:09:24.459 "is_configured": true, 00:09:24.459 "data_offset": 0, 00:09:24.459 "data_size": 65536 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "name": "BaseBdev2", 00:09:24.459 "uuid": "6f734dcd-533e-4b0b-9e66-5a3ea9d53c40", 00:09:24.459 "is_configured": true, 00:09:24.459 "data_offset": 0, 00:09:24.459 "data_size": 65536 00:09:24.459 }, 00:09:24.459 { 00:09:24.459 "name": "BaseBdev3", 00:09:24.459 "uuid": "b7f81a24-28e8-4024-9e6e-ab5be4fdb6eb", 00:09:24.459 "is_configured": true, 
00:09:24.459 "data_offset": 0, 00:09:24.459 "data_size": 65536 00:09:24.459 } 00:09:24.459 ] 00:09:24.459 } 00:09:24.459 } 00:09:24.459 }' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.459 BaseBdev2 00:09:24.459 BaseBdev3' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.459 19:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.459 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.460 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.460 [2024-11-27 19:07:34.066934] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.460 [2024-11-27 19:07:34.066967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.460 [2024-11-27 19:07:34.067045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.719 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.719 "name": "Existed_Raid", 00:09:24.719 "uuid": "9d6d84cf-c5b6-4289-91f2-918bc0b46402", 00:09:24.719 "strip_size_kb": 64, 00:09:24.719 "state": "offline", 00:09:24.719 "raid_level": "raid0", 00:09:24.719 "superblock": false, 00:09:24.719 "num_base_bdevs": 3, 00:09:24.719 "num_base_bdevs_discovered": 2, 00:09:24.719 "num_base_bdevs_operational": 2, 00:09:24.719 "base_bdevs_list": [ 00:09:24.719 { 00:09:24.719 "name": null, 00:09:24.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.719 "is_configured": false, 00:09:24.719 "data_offset": 0, 00:09:24.720 "data_size": 65536 00:09:24.720 }, 00:09:24.720 { 00:09:24.720 "name": "BaseBdev2", 00:09:24.720 "uuid": "6f734dcd-533e-4b0b-9e66-5a3ea9d53c40", 00:09:24.720 "is_configured": true, 00:09:24.720 "data_offset": 0, 00:09:24.720 "data_size": 65536 00:09:24.720 }, 00:09:24.720 { 00:09:24.720 "name": "BaseBdev3", 00:09:24.720 "uuid": "b7f81a24-28e8-4024-9e6e-ab5be4fdb6eb", 00:09:24.720 "is_configured": true, 00:09:24.720 "data_offset": 0, 00:09:24.720 "data_size": 65536 00:09:24.720 } 00:09:24.720 ] 00:09:24.720 }' 00:09:24.720 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.720 19:07:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.979 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.979 [2024-11-27 19:07:34.613013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.238 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.238 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.238 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.238 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.239 19:07:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.239 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.239 [2024-11-27 19:07:34.770744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.239 [2024-11-27 19:07:34.770821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 BaseBdev2 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 19:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 [ 00:09:25.498 { 00:09:25.498 "name": "BaseBdev2", 00:09:25.498 "aliases": [ 00:09:25.498 "13c2cde4-85d8-4a61-bd38-9217af673719" 00:09:25.498 ], 00:09:25.498 "product_name": "Malloc disk", 00:09:25.498 "block_size": 512, 00:09:25.498 "num_blocks": 65536, 00:09:25.498 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:25.498 "assigned_rate_limits": { 00:09:25.498 "rw_ios_per_sec": 0, 00:09:25.498 "rw_mbytes_per_sec": 0, 00:09:25.498 "r_mbytes_per_sec": 0, 00:09:25.498 "w_mbytes_per_sec": 0 00:09:25.498 }, 00:09:25.498 "claimed": false, 00:09:25.498 "zoned": false, 00:09:25.498 "supported_io_types": { 00:09:25.498 "read": true, 00:09:25.498 "write": true, 00:09:25.498 "unmap": true, 00:09:25.498 "flush": true, 00:09:25.498 "reset": true, 00:09:25.498 "nvme_admin": false, 00:09:25.498 "nvme_io": false, 00:09:25.498 "nvme_io_md": false, 00:09:25.498 "write_zeroes": true, 00:09:25.498 "zcopy": true, 00:09:25.498 "get_zone_info": false, 00:09:25.498 "zone_management": false, 00:09:25.498 "zone_append": false, 00:09:25.498 "compare": false, 00:09:25.498 "compare_and_write": false, 00:09:25.498 "abort": true, 00:09:25.498 "seek_hole": false, 00:09:25.498 "seek_data": false, 00:09:25.498 "copy": true, 00:09:25.498 "nvme_iov_md": false 00:09:25.498 }, 00:09:25.498 "memory_domains": [ 00:09:25.498 { 00:09:25.498 "dma_device_id": "system", 00:09:25.498 "dma_device_type": 1 00:09:25.498 }, 
00:09:25.498 { 00:09:25.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.498 "dma_device_type": 2 00:09:25.498 } 00:09:25.498 ], 00:09:25.498 "driver_specific": {} 00:09:25.498 } 00:09:25.498 ] 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 BaseBdev3 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.499 [ 00:09:25.499 { 00:09:25.499 "name": "BaseBdev3", 00:09:25.499 "aliases": [ 00:09:25.499 "30fd6eac-b0f4-432f-85b8-808332bc8729" 00:09:25.499 ], 00:09:25.499 "product_name": "Malloc disk", 00:09:25.499 "block_size": 512, 00:09:25.499 "num_blocks": 65536, 00:09:25.499 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:25.499 "assigned_rate_limits": { 00:09:25.499 "rw_ios_per_sec": 0, 00:09:25.499 "rw_mbytes_per_sec": 0, 00:09:25.499 "r_mbytes_per_sec": 0, 00:09:25.499 "w_mbytes_per_sec": 0 00:09:25.499 }, 00:09:25.499 "claimed": false, 00:09:25.499 "zoned": false, 00:09:25.499 "supported_io_types": { 00:09:25.499 "read": true, 00:09:25.499 "write": true, 00:09:25.499 "unmap": true, 00:09:25.499 "flush": true, 00:09:25.499 "reset": true, 00:09:25.499 "nvme_admin": false, 00:09:25.499 "nvme_io": false, 00:09:25.499 "nvme_io_md": false, 00:09:25.499 "write_zeroes": true, 00:09:25.499 "zcopy": true, 00:09:25.499 "get_zone_info": false, 00:09:25.499 "zone_management": false, 00:09:25.499 "zone_append": false, 00:09:25.499 "compare": false, 00:09:25.499 "compare_and_write": false, 00:09:25.499 "abort": true, 00:09:25.499 "seek_hole": false, 00:09:25.499 "seek_data": false, 00:09:25.499 "copy": true, 00:09:25.499 "nvme_iov_md": false 00:09:25.499 }, 00:09:25.499 "memory_domains": [ 00:09:25.499 { 00:09:25.499 "dma_device_id": "system", 00:09:25.499 "dma_device_type": 1 00:09:25.499 }, 00:09:25.499 { 
00:09:25.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.499 "dma_device_type": 2 00:09:25.499 } 00:09:25.499 ], 00:09:25.499 "driver_specific": {} 00:09:25.499 } 00:09:25.499 ] 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.499 [2024-11-27 19:07:35.101928] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.499 [2024-11-27 19:07:35.102048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.499 [2024-11-27 19:07:35.102098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.499 [2024-11-27 19:07:35.104321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.499 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.757 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.757 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.757 "name": "Existed_Raid", 00:09:25.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.757 "strip_size_kb": 64, 00:09:25.757 "state": "configuring", 00:09:25.757 "raid_level": "raid0", 00:09:25.757 "superblock": false, 00:09:25.757 "num_base_bdevs": 3, 00:09:25.757 "num_base_bdevs_discovered": 2, 00:09:25.757 "num_base_bdevs_operational": 3, 00:09:25.757 "base_bdevs_list": [ 00:09:25.757 { 00:09:25.757 "name": "BaseBdev1", 00:09:25.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.757 
"is_configured": false, 00:09:25.757 "data_offset": 0, 00:09:25.757 "data_size": 0 00:09:25.757 }, 00:09:25.757 { 00:09:25.757 "name": "BaseBdev2", 00:09:25.757 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:25.757 "is_configured": true, 00:09:25.757 "data_offset": 0, 00:09:25.757 "data_size": 65536 00:09:25.757 }, 00:09:25.757 { 00:09:25.757 "name": "BaseBdev3", 00:09:25.757 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:25.757 "is_configured": true, 00:09:25.757 "data_offset": 0, 00:09:25.757 "data_size": 65536 00:09:25.757 } 00:09:25.757 ] 00:09:25.757 }' 00:09:25.757 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.757 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.016 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:26.016 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.016 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.016 [2024-11-27 19:07:35.537224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.017 19:07:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.017 "name": "Existed_Raid", 00:09:26.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.017 "strip_size_kb": 64, 00:09:26.017 "state": "configuring", 00:09:26.017 "raid_level": "raid0", 00:09:26.017 "superblock": false, 00:09:26.017 "num_base_bdevs": 3, 00:09:26.017 "num_base_bdevs_discovered": 1, 00:09:26.017 "num_base_bdevs_operational": 3, 00:09:26.017 "base_bdevs_list": [ 00:09:26.017 { 00:09:26.017 "name": "BaseBdev1", 00:09:26.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.017 "is_configured": false, 00:09:26.017 "data_offset": 0, 00:09:26.017 "data_size": 0 00:09:26.017 }, 00:09:26.017 { 00:09:26.017 "name": null, 00:09:26.017 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:26.017 "is_configured": false, 00:09:26.017 "data_offset": 0, 
00:09:26.017 "data_size": 65536 00:09:26.017 }, 00:09:26.017 { 00:09:26.017 "name": "BaseBdev3", 00:09:26.017 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:26.017 "is_configured": true, 00:09:26.017 "data_offset": 0, 00:09:26.017 "data_size": 65536 00:09:26.017 } 00:09:26.017 ] 00:09:26.017 }' 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.017 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.586 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.586 19:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.586 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.586 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.586 19:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.586 [2024-11-27 19:07:36.062792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.586 BaseBdev1 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.586 [ 00:09:26.586 { 00:09:26.586 "name": "BaseBdev1", 00:09:26.586 "aliases": [ 00:09:26.586 "90bbd1ce-b92c-45db-afaa-96a791c6989c" 00:09:26.586 ], 00:09:26.586 "product_name": "Malloc disk", 00:09:26.586 "block_size": 512, 00:09:26.586 "num_blocks": 65536, 00:09:26.586 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:26.586 "assigned_rate_limits": { 00:09:26.586 "rw_ios_per_sec": 0, 00:09:26.586 "rw_mbytes_per_sec": 0, 00:09:26.586 "r_mbytes_per_sec": 0, 00:09:26.586 "w_mbytes_per_sec": 0 00:09:26.586 }, 00:09:26.586 "claimed": true, 00:09:26.586 "claim_type": "exclusive_write", 00:09:26.586 "zoned": false, 00:09:26.586 "supported_io_types": { 00:09:26.586 "read": true, 00:09:26.586 "write": true, 00:09:26.586 "unmap": 
true, 00:09:26.586 "flush": true, 00:09:26.586 "reset": true, 00:09:26.586 "nvme_admin": false, 00:09:26.586 "nvme_io": false, 00:09:26.586 "nvme_io_md": false, 00:09:26.586 "write_zeroes": true, 00:09:26.586 "zcopy": true, 00:09:26.586 "get_zone_info": false, 00:09:26.586 "zone_management": false, 00:09:26.586 "zone_append": false, 00:09:26.586 "compare": false, 00:09:26.586 "compare_and_write": false, 00:09:26.586 "abort": true, 00:09:26.586 "seek_hole": false, 00:09:26.586 "seek_data": false, 00:09:26.586 "copy": true, 00:09:26.586 "nvme_iov_md": false 00:09:26.586 }, 00:09:26.586 "memory_domains": [ 00:09:26.586 { 00:09:26.586 "dma_device_id": "system", 00:09:26.586 "dma_device_type": 1 00:09:26.586 }, 00:09:26.586 { 00:09:26.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.586 "dma_device_type": 2 00:09:26.586 } 00:09:26.586 ], 00:09:26.586 "driver_specific": {} 00:09:26.586 } 00:09:26.586 ] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.586 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.587 19:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.587 "name": "Existed_Raid", 00:09:26.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.587 "strip_size_kb": 64, 00:09:26.587 "state": "configuring", 00:09:26.587 "raid_level": "raid0", 00:09:26.587 "superblock": false, 00:09:26.587 "num_base_bdevs": 3, 00:09:26.587 "num_base_bdevs_discovered": 2, 00:09:26.587 "num_base_bdevs_operational": 3, 00:09:26.587 "base_bdevs_list": [ 00:09:26.587 { 00:09:26.587 "name": "BaseBdev1", 00:09:26.587 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:26.587 "is_configured": true, 00:09:26.587 "data_offset": 0, 00:09:26.587 "data_size": 65536 00:09:26.587 }, 00:09:26.587 { 00:09:26.587 "name": null, 00:09:26.587 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:26.587 "is_configured": false, 00:09:26.587 "data_offset": 0, 00:09:26.587 "data_size": 65536 00:09:26.587 }, 00:09:26.587 { 00:09:26.587 "name": "BaseBdev3", 00:09:26.587 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:26.587 "is_configured": true, 00:09:26.587 "data_offset": 0, 
00:09:26.587 "data_size": 65536 00:09:26.587 } 00:09:26.587 ] 00:09:26.587 }' 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.587 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.155 [2024-11-27 19:07:36.609879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.155 "name": "Existed_Raid", 00:09:27.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.155 "strip_size_kb": 64, 00:09:27.155 "state": "configuring", 00:09:27.155 "raid_level": "raid0", 00:09:27.155 "superblock": false, 00:09:27.155 "num_base_bdevs": 3, 00:09:27.155 "num_base_bdevs_discovered": 1, 00:09:27.155 "num_base_bdevs_operational": 3, 00:09:27.155 "base_bdevs_list": [ 00:09:27.155 { 00:09:27.155 "name": "BaseBdev1", 00:09:27.155 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:27.155 "is_configured": true, 00:09:27.155 "data_offset": 0, 00:09:27.155 "data_size": 65536 00:09:27.155 }, 00:09:27.155 { 
00:09:27.155 "name": null, 00:09:27.155 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:27.155 "is_configured": false, 00:09:27.155 "data_offset": 0, 00:09:27.155 "data_size": 65536 00:09:27.155 }, 00:09:27.155 { 00:09:27.155 "name": null, 00:09:27.155 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:27.155 "is_configured": false, 00:09:27.155 "data_offset": 0, 00:09:27.155 "data_size": 65536 00:09:27.155 } 00:09:27.155 ] 00:09:27.155 }' 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.155 19:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 [2024-11-27 19:07:37.113083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.723 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.723 "name": "Existed_Raid", 00:09:27.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.723 "strip_size_kb": 64, 00:09:27.723 "state": "configuring", 00:09:27.723 "raid_level": "raid0", 00:09:27.723 
"superblock": false, 00:09:27.723 "num_base_bdevs": 3, 00:09:27.723 "num_base_bdevs_discovered": 2, 00:09:27.723 "num_base_bdevs_operational": 3, 00:09:27.723 "base_bdevs_list": [ 00:09:27.723 { 00:09:27.723 "name": "BaseBdev1", 00:09:27.723 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:27.723 "is_configured": true, 00:09:27.723 "data_offset": 0, 00:09:27.723 "data_size": 65536 00:09:27.723 }, 00:09:27.723 { 00:09:27.723 "name": null, 00:09:27.723 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:27.723 "is_configured": false, 00:09:27.723 "data_offset": 0, 00:09:27.723 "data_size": 65536 00:09:27.723 }, 00:09:27.723 { 00:09:27.723 "name": "BaseBdev3", 00:09:27.723 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:27.723 "is_configured": true, 00:09:27.723 "data_offset": 0, 00:09:27.723 "data_size": 65536 00:09:27.723 } 00:09:27.723 ] 00:09:27.723 }' 00:09:27.724 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.724 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.982 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.982 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.982 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.982 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.982 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.242 [2024-11-27 19:07:37.632192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.242 "name": "Existed_Raid", 00:09:28.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.242 "strip_size_kb": 64, 00:09:28.242 "state": "configuring", 00:09:28.242 "raid_level": "raid0", 00:09:28.242 "superblock": false, 00:09:28.242 "num_base_bdevs": 3, 00:09:28.242 "num_base_bdevs_discovered": 1, 00:09:28.242 "num_base_bdevs_operational": 3, 00:09:28.242 "base_bdevs_list": [ 00:09:28.242 { 00:09:28.242 "name": null, 00:09:28.242 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:28.242 "is_configured": false, 00:09:28.242 "data_offset": 0, 00:09:28.242 "data_size": 65536 00:09:28.242 }, 00:09:28.242 { 00:09:28.242 "name": null, 00:09:28.242 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:28.242 "is_configured": false, 00:09:28.242 "data_offset": 0, 00:09:28.242 "data_size": 65536 00:09:28.242 }, 00:09:28.242 { 00:09:28.242 "name": "BaseBdev3", 00:09:28.242 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:28.242 "is_configured": true, 00:09:28.242 "data_offset": 0, 00:09:28.242 "data_size": 65536 00:09:28.242 } 00:09:28.242 ] 00:09:28.242 }' 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.242 19:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.820 [2024-11-27 19:07:38.247213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.820 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.820 "name": "Existed_Raid", 00:09:28.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.820 "strip_size_kb": 64, 00:09:28.820 "state": "configuring", 00:09:28.820 "raid_level": "raid0", 00:09:28.820 "superblock": false, 00:09:28.820 "num_base_bdevs": 3, 00:09:28.820 "num_base_bdevs_discovered": 2, 00:09:28.820 "num_base_bdevs_operational": 3, 00:09:28.820 "base_bdevs_list": [ 00:09:28.820 { 00:09:28.820 "name": null, 00:09:28.820 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:28.820 "is_configured": false, 00:09:28.820 "data_offset": 0, 00:09:28.820 "data_size": 65536 00:09:28.820 }, 00:09:28.820 { 00:09:28.820 "name": "BaseBdev2", 00:09:28.820 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:28.820 "is_configured": true, 00:09:28.820 "data_offset": 0, 00:09:28.820 "data_size": 65536 00:09:28.820 }, 00:09:28.820 { 00:09:28.820 "name": "BaseBdev3", 00:09:28.821 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:28.821 "is_configured": true, 00:09:28.821 "data_offset": 0, 00:09:28.821 "data_size": 65536 00:09:28.821 } 00:09:28.821 ] 00:09:28.821 }' 00:09:28.821 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.821 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.110 19:07:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:29.110 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90bbd1ce-b92c-45db-afaa-96a791c6989c 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.369 [2024-11-27 19:07:38.825453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.369 [2024-11-27 19:07:38.825504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.369 [2024-11-27 19:07:38.825515] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:29.369 [2024-11-27 19:07:38.825802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:29.369 [2024-11-27 19:07:38.826002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.369 [2024-11-27 19:07:38.826012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:29.369 [2024-11-27 19:07:38.826290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.369 NewBaseBdev 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.369 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:29.369 [ 00:09:29.369 { 00:09:29.369 "name": "NewBaseBdev", 00:09:29.369 "aliases": [ 00:09:29.369 "90bbd1ce-b92c-45db-afaa-96a791c6989c" 00:09:29.369 ], 00:09:29.369 "product_name": "Malloc disk", 00:09:29.369 "block_size": 512, 00:09:29.369 "num_blocks": 65536, 00:09:29.369 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:29.369 "assigned_rate_limits": { 00:09:29.369 "rw_ios_per_sec": 0, 00:09:29.369 "rw_mbytes_per_sec": 0, 00:09:29.369 "r_mbytes_per_sec": 0, 00:09:29.369 "w_mbytes_per_sec": 0 00:09:29.369 }, 00:09:29.369 "claimed": true, 00:09:29.369 "claim_type": "exclusive_write", 00:09:29.369 "zoned": false, 00:09:29.369 "supported_io_types": { 00:09:29.369 "read": true, 00:09:29.369 "write": true, 00:09:29.370 "unmap": true, 00:09:29.370 "flush": true, 00:09:29.370 "reset": true, 00:09:29.370 "nvme_admin": false, 00:09:29.370 "nvme_io": false, 00:09:29.370 "nvme_io_md": false, 00:09:29.370 "write_zeroes": true, 00:09:29.370 "zcopy": true, 00:09:29.370 "get_zone_info": false, 00:09:29.370 "zone_management": false, 00:09:29.370 "zone_append": false, 00:09:29.370 "compare": false, 00:09:29.370 "compare_and_write": false, 00:09:29.370 "abort": true, 00:09:29.370 "seek_hole": false, 00:09:29.370 "seek_data": false, 00:09:29.370 "copy": true, 00:09:29.370 "nvme_iov_md": false 00:09:29.370 }, 00:09:29.370 "memory_domains": [ 00:09:29.370 { 00:09:29.370 "dma_device_id": "system", 00:09:29.370 "dma_device_type": 1 00:09:29.370 }, 00:09:29.370 { 00:09:29.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.370 "dma_device_type": 2 00:09:29.370 } 00:09:29.370 ], 00:09:29.370 "driver_specific": {} 00:09:29.370 } 00:09:29.370 ] 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.370 "name": "Existed_Raid", 00:09:29.370 "uuid": "eabf04d4-7371-4638-8426-021c630ef6fa", 00:09:29.370 "strip_size_kb": 64, 00:09:29.370 "state": "online", 00:09:29.370 "raid_level": "raid0", 00:09:29.370 "superblock": false, 00:09:29.370 "num_base_bdevs": 3, 00:09:29.370 
"num_base_bdevs_discovered": 3, 00:09:29.370 "num_base_bdevs_operational": 3, 00:09:29.370 "base_bdevs_list": [ 00:09:29.370 { 00:09:29.370 "name": "NewBaseBdev", 00:09:29.370 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:29.370 "is_configured": true, 00:09:29.370 "data_offset": 0, 00:09:29.370 "data_size": 65536 00:09:29.370 }, 00:09:29.370 { 00:09:29.370 "name": "BaseBdev2", 00:09:29.370 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:29.370 "is_configured": true, 00:09:29.370 "data_offset": 0, 00:09:29.370 "data_size": 65536 00:09:29.370 }, 00:09:29.370 { 00:09:29.370 "name": "BaseBdev3", 00:09:29.370 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:29.370 "is_configured": true, 00:09:29.370 "data_offset": 0, 00:09:29.370 "data_size": 65536 00:09:29.370 } 00:09:29.370 ] 00:09:29.370 }' 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.370 19:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.939 [2024-11-27 19:07:39.332974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.939 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.939 "name": "Existed_Raid", 00:09:29.939 "aliases": [ 00:09:29.939 "eabf04d4-7371-4638-8426-021c630ef6fa" 00:09:29.939 ], 00:09:29.939 "product_name": "Raid Volume", 00:09:29.939 "block_size": 512, 00:09:29.939 "num_blocks": 196608, 00:09:29.939 "uuid": "eabf04d4-7371-4638-8426-021c630ef6fa", 00:09:29.939 "assigned_rate_limits": { 00:09:29.939 "rw_ios_per_sec": 0, 00:09:29.939 "rw_mbytes_per_sec": 0, 00:09:29.939 "r_mbytes_per_sec": 0, 00:09:29.939 "w_mbytes_per_sec": 0 00:09:29.939 }, 00:09:29.939 "claimed": false, 00:09:29.939 "zoned": false, 00:09:29.939 "supported_io_types": { 00:09:29.939 "read": true, 00:09:29.939 "write": true, 00:09:29.939 "unmap": true, 00:09:29.939 "flush": true, 00:09:29.939 "reset": true, 00:09:29.939 "nvme_admin": false, 00:09:29.939 "nvme_io": false, 00:09:29.939 "nvme_io_md": false, 00:09:29.939 "write_zeroes": true, 00:09:29.939 "zcopy": false, 00:09:29.939 "get_zone_info": false, 00:09:29.939 "zone_management": false, 00:09:29.939 "zone_append": false, 00:09:29.939 "compare": false, 00:09:29.939 "compare_and_write": false, 00:09:29.939 "abort": false, 00:09:29.939 "seek_hole": false, 00:09:29.939 "seek_data": false, 00:09:29.939 "copy": false, 00:09:29.939 "nvme_iov_md": false 00:09:29.939 }, 00:09:29.939 "memory_domains": [ 00:09:29.939 { 00:09:29.939 "dma_device_id": "system", 00:09:29.939 "dma_device_type": 1 00:09:29.939 }, 00:09:29.940 { 00:09:29.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.940 "dma_device_type": 2 00:09:29.940 }, 
00:09:29.940 { 00:09:29.940 "dma_device_id": "system", 00:09:29.940 "dma_device_type": 1 00:09:29.940 }, 00:09:29.940 { 00:09:29.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.940 "dma_device_type": 2 00:09:29.940 }, 00:09:29.940 { 00:09:29.940 "dma_device_id": "system", 00:09:29.940 "dma_device_type": 1 00:09:29.940 }, 00:09:29.940 { 00:09:29.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.940 "dma_device_type": 2 00:09:29.940 } 00:09:29.940 ], 00:09:29.940 "driver_specific": { 00:09:29.940 "raid": { 00:09:29.940 "uuid": "eabf04d4-7371-4638-8426-021c630ef6fa", 00:09:29.940 "strip_size_kb": 64, 00:09:29.940 "state": "online", 00:09:29.940 "raid_level": "raid0", 00:09:29.940 "superblock": false, 00:09:29.940 "num_base_bdevs": 3, 00:09:29.940 "num_base_bdevs_discovered": 3, 00:09:29.940 "num_base_bdevs_operational": 3, 00:09:29.940 "base_bdevs_list": [ 00:09:29.940 { 00:09:29.940 "name": "NewBaseBdev", 00:09:29.940 "uuid": "90bbd1ce-b92c-45db-afaa-96a791c6989c", 00:09:29.940 "is_configured": true, 00:09:29.940 "data_offset": 0, 00:09:29.940 "data_size": 65536 00:09:29.940 }, 00:09:29.940 { 00:09:29.940 "name": "BaseBdev2", 00:09:29.940 "uuid": "13c2cde4-85d8-4a61-bd38-9217af673719", 00:09:29.940 "is_configured": true, 00:09:29.940 "data_offset": 0, 00:09:29.940 "data_size": 65536 00:09:29.940 }, 00:09:29.940 { 00:09:29.940 "name": "BaseBdev3", 00:09:29.940 "uuid": "30fd6eac-b0f4-432f-85b8-808332bc8729", 00:09:29.940 "is_configured": true, 00:09:29.940 "data_offset": 0, 00:09:29.940 "data_size": 65536 00:09:29.940 } 00:09:29.940 ] 00:09:29.940 } 00:09:29.940 } 00:09:29.940 }' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.940 BaseBdev2 00:09:29.940 BaseBdev3' 00:09:29.940 19:07:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.940 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.201 [2024-11-27 19:07:39.612110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.201 [2024-11-27 19:07:39.612143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.201 [2024-11-27 19:07:39.612228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.201 [2024-11-27 19:07:39.612288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.201 [2024-11-27 19:07:39.612302] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63915 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63915 ']' 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63915 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63915 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63915' 00:09:30.201 killing process with pid 63915 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63915 00:09:30.201 [2024-11-27 19:07:39.649163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.201 19:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63915 00:09:30.461 [2024-11-27 19:07:39.967547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.843 19:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.843 00:09:31.843 real 0m10.870s 00:09:31.843 user 0m17.061s 00:09:31.843 sys 0m2.060s 00:09:31.843 19:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:31.843 ************************************ 00:09:31.843 END TEST raid_state_function_test 00:09:31.843 ************************************ 00:09:31.843 19:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.844 19:07:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:31.844 19:07:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.844 19:07:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.844 19:07:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.844 ************************************ 00:09:31.844 START TEST raid_state_function_test_sb 00:09:31.844 ************************************ 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64542 00:09:31.844 19:07:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64542' 00:09:31.844 Process raid pid: 64542 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64542 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64542 ']' 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.844 19:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.844 [2024-11-27 19:07:41.347044] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:31.844 [2024-11-27 19:07:41.347302] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.104 [2024-11-27 19:07:41.527652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.104 [2024-11-27 19:07:41.662359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.364 [2024-11-27 19:07:41.896061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.364 [2024-11-27 19:07:41.896106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.624 [2024-11-27 19:07:42.177487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.624 [2024-11-27 19:07:42.177550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.624 [2024-11-27 19:07:42.177567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.624 [2024-11-27 19:07:42.177577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.624 [2024-11-27 19:07:42.177583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:32.624 [2024-11-27 19:07:42.177593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.624 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.624 "name": "Existed_Raid", 00:09:32.624 "uuid": "d2880fb5-7fc7-4d26-8432-8808fb410b98", 00:09:32.624 "strip_size_kb": 64, 00:09:32.624 "state": "configuring", 00:09:32.624 "raid_level": "raid0", 00:09:32.624 "superblock": true, 00:09:32.624 "num_base_bdevs": 3, 00:09:32.624 "num_base_bdevs_discovered": 0, 00:09:32.624 "num_base_bdevs_operational": 3, 00:09:32.624 "base_bdevs_list": [ 00:09:32.625 { 00:09:32.625 "name": "BaseBdev1", 00:09:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.625 "is_configured": false, 00:09:32.625 "data_offset": 0, 00:09:32.625 "data_size": 0 00:09:32.625 }, 00:09:32.625 { 00:09:32.625 "name": "BaseBdev2", 00:09:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.625 "is_configured": false, 00:09:32.625 "data_offset": 0, 00:09:32.625 "data_size": 0 00:09:32.625 }, 00:09:32.625 { 00:09:32.625 "name": "BaseBdev3", 00:09:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.625 "is_configured": false, 00:09:32.625 "data_offset": 0, 00:09:32.625 "data_size": 0 00:09:32.625 } 00:09:32.625 ] 00:09:32.625 }' 00:09:32.625 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.625 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.194 [2024-11-27 19:07:42.660567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.194 [2024-11-27 19:07:42.660653] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.194 [2024-11-27 19:07:42.672564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.194 [2024-11-27 19:07:42.672645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.194 [2024-11-27 19:07:42.672673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.194 [2024-11-27 19:07:42.672687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.194 [2024-11-27 19:07:42.672706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.194 [2024-11-27 19:07:42.672716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.194 [2024-11-27 19:07:42.726597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.194 BaseBdev1 
00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.194 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.195 [ 00:09:33.195 { 00:09:33.195 "name": "BaseBdev1", 00:09:33.195 "aliases": [ 00:09:33.195 "04212fc5-058d-47f0-accf-7929a4ec1d17" 00:09:33.195 ], 00:09:33.195 "product_name": "Malloc disk", 00:09:33.195 "block_size": 512, 00:09:33.195 "num_blocks": 65536, 00:09:33.195 "uuid": "04212fc5-058d-47f0-accf-7929a4ec1d17", 00:09:33.195 "assigned_rate_limits": { 00:09:33.195 
"rw_ios_per_sec": 0, 00:09:33.195 "rw_mbytes_per_sec": 0, 00:09:33.195 "r_mbytes_per_sec": 0, 00:09:33.195 "w_mbytes_per_sec": 0 00:09:33.195 }, 00:09:33.195 "claimed": true, 00:09:33.195 "claim_type": "exclusive_write", 00:09:33.195 "zoned": false, 00:09:33.195 "supported_io_types": { 00:09:33.195 "read": true, 00:09:33.195 "write": true, 00:09:33.195 "unmap": true, 00:09:33.195 "flush": true, 00:09:33.195 "reset": true, 00:09:33.195 "nvme_admin": false, 00:09:33.195 "nvme_io": false, 00:09:33.195 "nvme_io_md": false, 00:09:33.195 "write_zeroes": true, 00:09:33.195 "zcopy": true, 00:09:33.195 "get_zone_info": false, 00:09:33.195 "zone_management": false, 00:09:33.195 "zone_append": false, 00:09:33.195 "compare": false, 00:09:33.195 "compare_and_write": false, 00:09:33.195 "abort": true, 00:09:33.195 "seek_hole": false, 00:09:33.195 "seek_data": false, 00:09:33.195 "copy": true, 00:09:33.195 "nvme_iov_md": false 00:09:33.195 }, 00:09:33.195 "memory_domains": [ 00:09:33.195 { 00:09:33.195 "dma_device_id": "system", 00:09:33.195 "dma_device_type": 1 00:09:33.195 }, 00:09:33.195 { 00:09:33.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.195 "dma_device_type": 2 00:09:33.195 } 00:09:33.195 ], 00:09:33.195 "driver_specific": {} 00:09:33.195 } 00:09:33.195 ] 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.195 "name": "Existed_Raid", 00:09:33.195 "uuid": "6b58f94a-1071-4110-bd76-a88ce21a9564", 00:09:33.195 "strip_size_kb": 64, 00:09:33.195 "state": "configuring", 00:09:33.195 "raid_level": "raid0", 00:09:33.195 "superblock": true, 00:09:33.195 "num_base_bdevs": 3, 00:09:33.195 "num_base_bdevs_discovered": 1, 00:09:33.195 "num_base_bdevs_operational": 3, 00:09:33.195 "base_bdevs_list": [ 00:09:33.195 { 00:09:33.195 "name": "BaseBdev1", 00:09:33.195 "uuid": "04212fc5-058d-47f0-accf-7929a4ec1d17", 00:09:33.195 "is_configured": true, 00:09:33.195 "data_offset": 2048, 00:09:33.195 "data_size": 63488 
00:09:33.195 }, 00:09:33.195 { 00:09:33.195 "name": "BaseBdev2", 00:09:33.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.195 "is_configured": false, 00:09:33.195 "data_offset": 0, 00:09:33.195 "data_size": 0 00:09:33.195 }, 00:09:33.195 { 00:09:33.195 "name": "BaseBdev3", 00:09:33.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.195 "is_configured": false, 00:09:33.195 "data_offset": 0, 00:09:33.195 "data_size": 0 00:09:33.195 } 00:09:33.195 ] 00:09:33.195 }' 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.195 19:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.765 [2024-11-27 19:07:43.185841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.765 [2024-11-27 19:07:43.185944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.765 [2024-11-27 19:07:43.197884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.765 [2024-11-27 
19:07:43.200029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.765 [2024-11-27 19:07:43.200134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.765 [2024-11-27 19:07:43.200166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.765 [2024-11-27 19:07:43.200189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.765 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.765 "name": "Existed_Raid", 00:09:33.765 "uuid": "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd", 00:09:33.765 "strip_size_kb": 64, 00:09:33.765 "state": "configuring", 00:09:33.765 "raid_level": "raid0", 00:09:33.765 "superblock": true, 00:09:33.765 "num_base_bdevs": 3, 00:09:33.765 "num_base_bdevs_discovered": 1, 00:09:33.765 "num_base_bdevs_operational": 3, 00:09:33.765 "base_bdevs_list": [ 00:09:33.765 { 00:09:33.765 "name": "BaseBdev1", 00:09:33.766 "uuid": "04212fc5-058d-47f0-accf-7929a4ec1d17", 00:09:33.766 "is_configured": true, 00:09:33.766 "data_offset": 2048, 00:09:33.766 "data_size": 63488 00:09:33.766 }, 00:09:33.766 { 00:09:33.766 "name": "BaseBdev2", 00:09:33.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.766 "is_configured": false, 00:09:33.766 "data_offset": 0, 00:09:33.766 "data_size": 0 00:09:33.766 }, 00:09:33.766 { 00:09:33.766 "name": "BaseBdev3", 00:09:33.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.766 "is_configured": false, 00:09:33.766 "data_offset": 0, 00:09:33.766 "data_size": 0 00:09:33.766 } 00:09:33.766 ] 00:09:33.766 }' 00:09:33.766 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.766 19:07:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.025 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.025 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.025 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.285 [2024-11-27 19:07:43.700933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.285 BaseBdev2 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.285 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.285 [ 00:09:34.285 { 00:09:34.285 "name": "BaseBdev2", 00:09:34.285 "aliases": [ 00:09:34.285 "982d93a7-bd3c-4d78-af74-47012d6afc82" 00:09:34.285 ], 00:09:34.285 "product_name": "Malloc disk", 00:09:34.285 "block_size": 512, 00:09:34.285 "num_blocks": 65536, 00:09:34.285 "uuid": "982d93a7-bd3c-4d78-af74-47012d6afc82", 00:09:34.286 "assigned_rate_limits": { 00:09:34.286 "rw_ios_per_sec": 0, 00:09:34.286 "rw_mbytes_per_sec": 0, 00:09:34.286 "r_mbytes_per_sec": 0, 00:09:34.286 "w_mbytes_per_sec": 0 00:09:34.286 }, 00:09:34.286 "claimed": true, 00:09:34.286 "claim_type": "exclusive_write", 00:09:34.286 "zoned": false, 00:09:34.286 "supported_io_types": { 00:09:34.286 "read": true, 00:09:34.286 "write": true, 00:09:34.286 "unmap": true, 00:09:34.286 "flush": true, 00:09:34.286 "reset": true, 00:09:34.286 "nvme_admin": false, 00:09:34.286 "nvme_io": false, 00:09:34.286 "nvme_io_md": false, 00:09:34.286 "write_zeroes": true, 00:09:34.286 "zcopy": true, 00:09:34.286 "get_zone_info": false, 00:09:34.286 "zone_management": false, 00:09:34.286 "zone_append": false, 00:09:34.286 "compare": false, 00:09:34.286 "compare_and_write": false, 00:09:34.286 "abort": true, 00:09:34.286 "seek_hole": false, 00:09:34.286 "seek_data": false, 00:09:34.286 "copy": true, 00:09:34.286 "nvme_iov_md": false 00:09:34.286 }, 00:09:34.286 "memory_domains": [ 00:09:34.286 { 00:09:34.286 "dma_device_id": "system", 00:09:34.286 "dma_device_type": 1 00:09:34.286 }, 00:09:34.286 { 00:09:34.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.286 "dma_device_type": 2 00:09:34.286 } 00:09:34.286 ], 00:09:34.286 "driver_specific": {} 00:09:34.286 } 00:09:34.286 ] 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.286 "name": "Existed_Raid", 00:09:34.286 "uuid": "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd", 00:09:34.286 "strip_size_kb": 64, 00:09:34.286 "state": "configuring", 00:09:34.286 "raid_level": "raid0", 00:09:34.286 "superblock": true, 00:09:34.286 "num_base_bdevs": 3, 00:09:34.286 "num_base_bdevs_discovered": 2, 00:09:34.286 "num_base_bdevs_operational": 3, 00:09:34.286 "base_bdevs_list": [ 00:09:34.286 { 00:09:34.286 "name": "BaseBdev1", 00:09:34.286 "uuid": "04212fc5-058d-47f0-accf-7929a4ec1d17", 00:09:34.286 "is_configured": true, 00:09:34.286 "data_offset": 2048, 00:09:34.286 "data_size": 63488 00:09:34.286 }, 00:09:34.286 { 00:09:34.286 "name": "BaseBdev2", 00:09:34.286 "uuid": "982d93a7-bd3c-4d78-af74-47012d6afc82", 00:09:34.286 "is_configured": true, 00:09:34.286 "data_offset": 2048, 00:09:34.286 "data_size": 63488 00:09:34.286 }, 00:09:34.286 { 00:09:34.286 "name": "BaseBdev3", 00:09:34.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.286 "is_configured": false, 00:09:34.286 "data_offset": 0, 00:09:34.286 "data_size": 0 00:09:34.286 } 00:09:34.286 ] 00:09:34.286 }' 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.286 19:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.546 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.546 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.546 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.806 [2024-11-27 19:07:44.237108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.806 [2024-11-27 19:07:44.237491] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.806 [2024-11-27 19:07:44.237555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.806 BaseBdev3 00:09:34.806 [2024-11-27 19:07:44.238075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.806 [2024-11-27 19:07:44.238305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.806 [2024-11-27 19:07:44.238349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.806 [2024-11-27 19:07:44.238545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.806 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.806 [ 00:09:34.806 { 00:09:34.806 "name": "BaseBdev3", 00:09:34.806 "aliases": [ 00:09:34.806 "222d2425-0e6a-46c2-a473-5afa3d998832" 00:09:34.806 ], 00:09:34.806 "product_name": "Malloc disk", 00:09:34.806 "block_size": 512, 00:09:34.806 "num_blocks": 65536, 00:09:34.806 "uuid": "222d2425-0e6a-46c2-a473-5afa3d998832", 00:09:34.806 "assigned_rate_limits": { 00:09:34.806 "rw_ios_per_sec": 0, 00:09:34.806 "rw_mbytes_per_sec": 0, 00:09:34.806 "r_mbytes_per_sec": 0, 00:09:34.806 "w_mbytes_per_sec": 0 00:09:34.806 }, 00:09:34.806 "claimed": true, 00:09:34.806 "claim_type": "exclusive_write", 00:09:34.806 "zoned": false, 00:09:34.806 "supported_io_types": { 00:09:34.806 "read": true, 00:09:34.806 "write": true, 00:09:34.806 "unmap": true, 00:09:34.806 "flush": true, 00:09:34.806 "reset": true, 00:09:34.806 "nvme_admin": false, 00:09:34.806 "nvme_io": false, 00:09:34.806 "nvme_io_md": false, 00:09:34.806 "write_zeroes": true, 00:09:34.806 "zcopy": true, 00:09:34.806 "get_zone_info": false, 00:09:34.806 "zone_management": false, 00:09:34.806 "zone_append": false, 00:09:34.806 "compare": false, 00:09:34.806 "compare_and_write": false, 00:09:34.806 "abort": true, 00:09:34.806 "seek_hole": false, 00:09:34.806 "seek_data": false, 00:09:34.806 "copy": true, 00:09:34.806 "nvme_iov_md": false 00:09:34.806 }, 00:09:34.806 "memory_domains": [ 00:09:34.806 { 00:09:34.806 "dma_device_id": "system", 00:09:34.806 "dma_device_type": 1 00:09:34.806 }, 00:09:34.806 { 00:09:34.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.807 "dma_device_type": 2 00:09:34.807 } 00:09:34.807 ], 00:09:34.807 "driver_specific": 
{} 00:09:34.807 } 00:09:34.807 ] 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.807 
19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.807 "name": "Existed_Raid", 00:09:34.807 "uuid": "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd", 00:09:34.807 "strip_size_kb": 64, 00:09:34.807 "state": "online", 00:09:34.807 "raid_level": "raid0", 00:09:34.807 "superblock": true, 00:09:34.807 "num_base_bdevs": 3, 00:09:34.807 "num_base_bdevs_discovered": 3, 00:09:34.807 "num_base_bdevs_operational": 3, 00:09:34.807 "base_bdevs_list": [ 00:09:34.807 { 00:09:34.807 "name": "BaseBdev1", 00:09:34.807 "uuid": "04212fc5-058d-47f0-accf-7929a4ec1d17", 00:09:34.807 "is_configured": true, 00:09:34.807 "data_offset": 2048, 00:09:34.807 "data_size": 63488 00:09:34.807 }, 00:09:34.807 { 00:09:34.807 "name": "BaseBdev2", 00:09:34.807 "uuid": "982d93a7-bd3c-4d78-af74-47012d6afc82", 00:09:34.807 "is_configured": true, 00:09:34.807 "data_offset": 2048, 00:09:34.807 "data_size": 63488 00:09:34.807 }, 00:09:34.807 { 00:09:34.807 "name": "BaseBdev3", 00:09:34.807 "uuid": "222d2425-0e6a-46c2-a473-5afa3d998832", 00:09:34.807 "is_configured": true, 00:09:34.807 "data_offset": 2048, 00:09:34.807 "data_size": 63488 00:09:34.807 } 00:09:34.807 ] 00:09:34.807 }' 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.807 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.066 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.066 [2024-11-27 19:07:44.696667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.326 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.326 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.326 "name": "Existed_Raid", 00:09:35.326 "aliases": [ 00:09:35.326 "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd" 00:09:35.326 ], 00:09:35.326 "product_name": "Raid Volume", 00:09:35.326 "block_size": 512, 00:09:35.326 "num_blocks": 190464, 00:09:35.326 "uuid": "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd", 00:09:35.326 "assigned_rate_limits": { 00:09:35.326 "rw_ios_per_sec": 0, 00:09:35.326 "rw_mbytes_per_sec": 0, 00:09:35.326 "r_mbytes_per_sec": 0, 00:09:35.326 "w_mbytes_per_sec": 0 00:09:35.326 }, 00:09:35.326 "claimed": false, 00:09:35.326 "zoned": false, 00:09:35.326 "supported_io_types": { 00:09:35.326 "read": true, 00:09:35.326 "write": true, 00:09:35.326 "unmap": true, 00:09:35.326 "flush": true, 00:09:35.326 "reset": true, 00:09:35.326 "nvme_admin": false, 00:09:35.326 "nvme_io": false, 00:09:35.326 "nvme_io_md": false, 00:09:35.326 
"write_zeroes": true, 00:09:35.326 "zcopy": false, 00:09:35.326 "get_zone_info": false, 00:09:35.326 "zone_management": false, 00:09:35.326 "zone_append": false, 00:09:35.326 "compare": false, 00:09:35.326 "compare_and_write": false, 00:09:35.326 "abort": false, 00:09:35.326 "seek_hole": false, 00:09:35.326 "seek_data": false, 00:09:35.326 "copy": false, 00:09:35.326 "nvme_iov_md": false 00:09:35.326 }, 00:09:35.326 "memory_domains": [ 00:09:35.326 { 00:09:35.326 "dma_device_id": "system", 00:09:35.326 "dma_device_type": 1 00:09:35.326 }, 00:09:35.326 { 00:09:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.326 "dma_device_type": 2 00:09:35.326 }, 00:09:35.326 { 00:09:35.326 "dma_device_id": "system", 00:09:35.326 "dma_device_type": 1 00:09:35.326 }, 00:09:35.326 { 00:09:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.326 "dma_device_type": 2 00:09:35.326 }, 00:09:35.326 { 00:09:35.326 "dma_device_id": "system", 00:09:35.326 "dma_device_type": 1 00:09:35.326 }, 00:09:35.326 { 00:09:35.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.326 "dma_device_type": 2 00:09:35.326 } 00:09:35.326 ], 00:09:35.326 "driver_specific": { 00:09:35.326 "raid": { 00:09:35.326 "uuid": "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd", 00:09:35.326 "strip_size_kb": 64, 00:09:35.326 "state": "online", 00:09:35.326 "raid_level": "raid0", 00:09:35.326 "superblock": true, 00:09:35.326 "num_base_bdevs": 3, 00:09:35.326 "num_base_bdevs_discovered": 3, 00:09:35.326 "num_base_bdevs_operational": 3, 00:09:35.326 "base_bdevs_list": [ 00:09:35.326 { 00:09:35.326 "name": "BaseBdev1", 00:09:35.326 "uuid": "04212fc5-058d-47f0-accf-7929a4ec1d17", 00:09:35.326 "is_configured": true, 00:09:35.326 "data_offset": 2048, 00:09:35.326 "data_size": 63488 00:09:35.326 }, 00:09:35.326 { 00:09:35.326 "name": "BaseBdev2", 00:09:35.326 "uuid": "982d93a7-bd3c-4d78-af74-47012d6afc82", 00:09:35.327 "is_configured": true, 00:09:35.327 "data_offset": 2048, 00:09:35.327 "data_size": 63488 00:09:35.327 }, 
00:09:35.327 { 00:09:35.327 "name": "BaseBdev3", 00:09:35.327 "uuid": "222d2425-0e6a-46c2-a473-5afa3d998832", 00:09:35.327 "is_configured": true, 00:09:35.327 "data_offset": 2048, 00:09:35.327 "data_size": 63488 00:09:35.327 } 00:09:35.327 ] 00:09:35.327 } 00:09:35.327 } 00:09:35.327 }' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.327 BaseBdev2 00:09:35.327 BaseBdev3' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.327 
19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.327 19:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 [2024-11-27 19:07:44.951940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.327 [2024-11-27 19:07:44.952011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.327 [2024-11-27 19:07:44.952094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.586 "name": "Existed_Raid", 00:09:35.586 "uuid": "dbc1c490-32d7-40b2-926e-c3fabbb6ffcd", 00:09:35.586 "strip_size_kb": 64, 00:09:35.586 "state": "offline", 00:09:35.586 "raid_level": "raid0", 00:09:35.586 "superblock": true, 00:09:35.586 "num_base_bdevs": 3, 00:09:35.586 "num_base_bdevs_discovered": 2, 00:09:35.586 "num_base_bdevs_operational": 2, 00:09:35.586 "base_bdevs_list": [ 00:09:35.586 { 00:09:35.586 "name": null, 00:09:35.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.586 "is_configured": false, 00:09:35.586 "data_offset": 0, 00:09:35.586 "data_size": 63488 00:09:35.586 }, 00:09:35.586 { 00:09:35.586 "name": "BaseBdev2", 00:09:35.586 "uuid": "982d93a7-bd3c-4d78-af74-47012d6afc82", 00:09:35.586 "is_configured": true, 00:09:35.586 "data_offset": 2048, 00:09:35.586 "data_size": 63488 00:09:35.586 }, 00:09:35.586 { 00:09:35.586 "name": "BaseBdev3", 00:09:35.586 "uuid": "222d2425-0e6a-46c2-a473-5afa3d998832", 
00:09:35.586 "is_configured": true, 00:09:35.586 "data_offset": 2048, 00:09:35.586 "data_size": 63488 00:09:35.586 } 00:09:35.586 ] 00:09:35.586 }' 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.586 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.105 [2024-11-27 19:07:45.512957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.105 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.106 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.106 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.106 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.106 [2024-11-27 19:07:45.669580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.106 [2024-11-27 19:07:45.669702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.365 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 BaseBdev2 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.366 19:07:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 [ 00:09:36.366 { 00:09:36.366 "name": "BaseBdev2", 00:09:36.366 "aliases": [ 00:09:36.366 "f3a48957-79a0-4e84-a7fc-f3aef32265d9" 00:09:36.366 ], 00:09:36.366 "product_name": "Malloc disk", 00:09:36.366 "block_size": 512, 00:09:36.366 "num_blocks": 65536, 00:09:36.366 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:36.366 "assigned_rate_limits": { 00:09:36.366 "rw_ios_per_sec": 0, 00:09:36.366 "rw_mbytes_per_sec": 0, 00:09:36.366 "r_mbytes_per_sec": 0, 00:09:36.366 "w_mbytes_per_sec": 0 00:09:36.366 }, 00:09:36.366 "claimed": false, 00:09:36.366 "zoned": false, 00:09:36.366 "supported_io_types": { 00:09:36.366 "read": true, 00:09:36.366 "write": true, 00:09:36.366 "unmap": true, 00:09:36.366 "flush": true, 00:09:36.366 "reset": true, 00:09:36.366 "nvme_admin": false, 00:09:36.366 "nvme_io": false, 00:09:36.366 "nvme_io_md": false, 00:09:36.366 "write_zeroes": true, 00:09:36.366 "zcopy": true, 00:09:36.366 "get_zone_info": false, 00:09:36.366 
"zone_management": false, 00:09:36.366 "zone_append": false, 00:09:36.366 "compare": false, 00:09:36.366 "compare_and_write": false, 00:09:36.366 "abort": true, 00:09:36.366 "seek_hole": false, 00:09:36.366 "seek_data": false, 00:09:36.366 "copy": true, 00:09:36.366 "nvme_iov_md": false 00:09:36.366 }, 00:09:36.366 "memory_domains": [ 00:09:36.366 { 00:09:36.366 "dma_device_id": "system", 00:09:36.366 "dma_device_type": 1 00:09:36.366 }, 00:09:36.366 { 00:09:36.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.366 "dma_device_type": 2 00:09:36.366 } 00:09:36.366 ], 00:09:36.366 "driver_specific": {} 00:09:36.366 } 00:09:36.366 ] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 BaseBdev3 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.366 [ 00:09:36.366 { 00:09:36.366 "name": "BaseBdev3", 00:09:36.366 "aliases": [ 00:09:36.366 "111a087d-2372-465f-9cbd-1d7cd3396f89" 00:09:36.366 ], 00:09:36.366 "product_name": "Malloc disk", 00:09:36.366 "block_size": 512, 00:09:36.366 "num_blocks": 65536, 00:09:36.366 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:36.366 "assigned_rate_limits": { 00:09:36.366 "rw_ios_per_sec": 0, 00:09:36.366 "rw_mbytes_per_sec": 0, 00:09:36.366 "r_mbytes_per_sec": 0, 00:09:36.366 "w_mbytes_per_sec": 0 00:09:36.366 }, 00:09:36.366 "claimed": false, 00:09:36.366 "zoned": false, 00:09:36.366 "supported_io_types": { 00:09:36.366 "read": true, 00:09:36.366 "write": true, 00:09:36.366 "unmap": true, 00:09:36.366 "flush": true, 00:09:36.366 "reset": true, 00:09:36.366 "nvme_admin": false, 00:09:36.366 "nvme_io": false, 00:09:36.366 "nvme_io_md": false, 00:09:36.366 "write_zeroes": true, 00:09:36.366 
"zcopy": true, 00:09:36.366 "get_zone_info": false, 00:09:36.366 "zone_management": false, 00:09:36.366 "zone_append": false, 00:09:36.366 "compare": false, 00:09:36.366 "compare_and_write": false, 00:09:36.366 "abort": true, 00:09:36.366 "seek_hole": false, 00:09:36.366 "seek_data": false, 00:09:36.366 "copy": true, 00:09:36.366 "nvme_iov_md": false 00:09:36.366 }, 00:09:36.366 "memory_domains": [ 00:09:36.366 { 00:09:36.366 "dma_device_id": "system", 00:09:36.366 "dma_device_type": 1 00:09:36.366 }, 00:09:36.366 { 00:09:36.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.366 "dma_device_type": 2 00:09:36.366 } 00:09:36.366 ], 00:09:36.366 "driver_specific": {} 00:09:36.366 } 00:09:36.366 ] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.366 19:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.626 [2024-11-27 19:07:46.001983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.626 [2024-11-27 19:07:46.002073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.626 [2024-11-27 19:07:46.002115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.626 [2024-11-27 19:07:46.004203] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.626 19:07:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.626 "name": "Existed_Raid", 00:09:36.626 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:36.626 "strip_size_kb": 64, 00:09:36.626 "state": "configuring", 00:09:36.626 "raid_level": "raid0", 00:09:36.626 "superblock": true, 00:09:36.626 "num_base_bdevs": 3, 00:09:36.626 "num_base_bdevs_discovered": 2, 00:09:36.626 "num_base_bdevs_operational": 3, 00:09:36.626 "base_bdevs_list": [ 00:09:36.626 { 00:09:36.626 "name": "BaseBdev1", 00:09:36.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.626 "is_configured": false, 00:09:36.626 "data_offset": 0, 00:09:36.626 "data_size": 0 00:09:36.626 }, 00:09:36.626 { 00:09:36.626 "name": "BaseBdev2", 00:09:36.626 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:36.626 "is_configured": true, 00:09:36.626 "data_offset": 2048, 00:09:36.626 "data_size": 63488 00:09:36.626 }, 00:09:36.626 { 00:09:36.626 "name": "BaseBdev3", 00:09:36.626 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:36.626 "is_configured": true, 00:09:36.626 "data_offset": 2048, 00:09:36.626 "data_size": 63488 00:09:36.626 } 00:09:36.626 ] 00:09:36.626 }' 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.626 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.886 [2024-11-27 19:07:46.469331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.886 19:07:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.886 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.145 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.145 "name": "Existed_Raid", 00:09:37.145 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:37.145 "strip_size_kb": 64, 
00:09:37.145 "state": "configuring", 00:09:37.145 "raid_level": "raid0", 00:09:37.145 "superblock": true, 00:09:37.145 "num_base_bdevs": 3, 00:09:37.145 "num_base_bdevs_discovered": 1, 00:09:37.145 "num_base_bdevs_operational": 3, 00:09:37.145 "base_bdevs_list": [ 00:09:37.145 { 00:09:37.145 "name": "BaseBdev1", 00:09:37.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.145 "is_configured": false, 00:09:37.145 "data_offset": 0, 00:09:37.145 "data_size": 0 00:09:37.145 }, 00:09:37.145 { 00:09:37.145 "name": null, 00:09:37.145 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:37.145 "is_configured": false, 00:09:37.145 "data_offset": 0, 00:09:37.145 "data_size": 63488 00:09:37.145 }, 00:09:37.145 { 00:09:37.145 "name": "BaseBdev3", 00:09:37.145 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:37.145 "is_configured": true, 00:09:37.145 "data_offset": 2048, 00:09:37.145 "data_size": 63488 00:09:37.145 } 00:09:37.145 ] 00:09:37.145 }' 00:09:37.145 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.145 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 [2024-11-27 19:07:46.993848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.405 BaseBdev1 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.405 19:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.405 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.405 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.405 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.405 
[ 00:09:37.405 { 00:09:37.405 "name": "BaseBdev1", 00:09:37.405 "aliases": [ 00:09:37.405 "cffbf590-e897-40ca-ac4f-3eea63dd2cb4" 00:09:37.405 ], 00:09:37.405 "product_name": "Malloc disk", 00:09:37.405 "block_size": 512, 00:09:37.405 "num_blocks": 65536, 00:09:37.405 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:37.405 "assigned_rate_limits": { 00:09:37.405 "rw_ios_per_sec": 0, 00:09:37.405 "rw_mbytes_per_sec": 0, 00:09:37.405 "r_mbytes_per_sec": 0, 00:09:37.405 "w_mbytes_per_sec": 0 00:09:37.405 }, 00:09:37.405 "claimed": true, 00:09:37.405 "claim_type": "exclusive_write", 00:09:37.405 "zoned": false, 00:09:37.405 "supported_io_types": { 00:09:37.405 "read": true, 00:09:37.405 "write": true, 00:09:37.405 "unmap": true, 00:09:37.405 "flush": true, 00:09:37.405 "reset": true, 00:09:37.405 "nvme_admin": false, 00:09:37.405 "nvme_io": false, 00:09:37.405 "nvme_io_md": false, 00:09:37.405 "write_zeroes": true, 00:09:37.405 "zcopy": true, 00:09:37.405 "get_zone_info": false, 00:09:37.405 "zone_management": false, 00:09:37.405 "zone_append": false, 00:09:37.405 "compare": false, 00:09:37.405 "compare_and_write": false, 00:09:37.405 "abort": true, 00:09:37.405 "seek_hole": false, 00:09:37.405 "seek_data": false, 00:09:37.405 "copy": true, 00:09:37.405 "nvme_iov_md": false 00:09:37.405 }, 00:09:37.405 "memory_domains": [ 00:09:37.405 { 00:09:37.406 "dma_device_id": "system", 00:09:37.406 "dma_device_type": 1 00:09:37.406 }, 00:09:37.406 { 00:09:37.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.406 "dma_device_type": 2 00:09:37.406 } 00:09:37.406 ], 00:09:37.406 "driver_specific": {} 00:09:37.406 } 00:09:37.406 ] 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.406 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.691 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.691 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.691 "name": "Existed_Raid", 00:09:37.691 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:37.691 "strip_size_kb": 64, 00:09:37.691 "state": "configuring", 00:09:37.691 "raid_level": "raid0", 00:09:37.691 "superblock": true, 
00:09:37.691 "num_base_bdevs": 3, 00:09:37.691 "num_base_bdevs_discovered": 2, 00:09:37.691 "num_base_bdevs_operational": 3, 00:09:37.691 "base_bdevs_list": [ 00:09:37.691 { 00:09:37.691 "name": "BaseBdev1", 00:09:37.691 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:37.691 "is_configured": true, 00:09:37.691 "data_offset": 2048, 00:09:37.691 "data_size": 63488 00:09:37.691 }, 00:09:37.691 { 00:09:37.691 "name": null, 00:09:37.691 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:37.691 "is_configured": false, 00:09:37.691 "data_offset": 0, 00:09:37.692 "data_size": 63488 00:09:37.692 }, 00:09:37.692 { 00:09:37.692 "name": "BaseBdev3", 00:09:37.692 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:37.692 "is_configured": true, 00:09:37.692 "data_offset": 2048, 00:09:37.692 "data_size": 63488 00:09:37.692 } 00:09:37.692 ] 00:09:37.692 }' 00:09:37.692 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.692 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.967 [2024-11-27 19:07:47.528973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.967 "name": "Existed_Raid", 00:09:37.967 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:37.967 "strip_size_kb": 64, 00:09:37.967 "state": "configuring", 00:09:37.967 "raid_level": "raid0", 00:09:37.967 "superblock": true, 00:09:37.967 "num_base_bdevs": 3, 00:09:37.967 "num_base_bdevs_discovered": 1, 00:09:37.967 "num_base_bdevs_operational": 3, 00:09:37.967 "base_bdevs_list": [ 00:09:37.967 { 00:09:37.967 "name": "BaseBdev1", 00:09:37.967 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:37.967 "is_configured": true, 00:09:37.967 "data_offset": 2048, 00:09:37.967 "data_size": 63488 00:09:37.967 }, 00:09:37.967 { 00:09:37.967 "name": null, 00:09:37.967 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:37.967 "is_configured": false, 00:09:37.967 "data_offset": 0, 00:09:37.967 "data_size": 63488 00:09:37.967 }, 00:09:37.967 { 00:09:37.967 "name": null, 00:09:37.967 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:37.967 "is_configured": false, 00:09:37.967 "data_offset": 0, 00:09:37.967 "data_size": 63488 00:09:37.967 } 00:09:37.967 ] 00:09:37.967 }' 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.967 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.537 [2024-11-27 19:07:47.996206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.537 19:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.537 "name": "Existed_Raid", 00:09:38.537 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:38.537 "strip_size_kb": 64, 00:09:38.537 "state": "configuring", 00:09:38.537 "raid_level": "raid0", 00:09:38.537 "superblock": true, 00:09:38.537 "num_base_bdevs": 3, 00:09:38.537 "num_base_bdevs_discovered": 2, 00:09:38.537 "num_base_bdevs_operational": 3, 00:09:38.537 "base_bdevs_list": [ 00:09:38.537 { 00:09:38.537 "name": "BaseBdev1", 00:09:38.537 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:38.537 "is_configured": true, 00:09:38.537 "data_offset": 2048, 00:09:38.537 "data_size": 63488 00:09:38.537 }, 00:09:38.537 { 00:09:38.537 "name": null, 00:09:38.537 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:38.537 "is_configured": false, 00:09:38.537 "data_offset": 0, 00:09:38.537 "data_size": 63488 00:09:38.537 }, 00:09:38.537 { 00:09:38.537 "name": "BaseBdev3", 00:09:38.537 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:38.537 "is_configured": true, 00:09:38.537 "data_offset": 2048, 00:09:38.537 "data_size": 63488 00:09:38.537 } 00:09:38.537 ] 00:09:38.537 }' 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.537 19:07:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.796 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.796 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.055 [2024-11-27 19:07:48.479427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.055 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.056 "name": "Existed_Raid", 00:09:39.056 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:39.056 "strip_size_kb": 64, 00:09:39.056 "state": "configuring", 00:09:39.056 "raid_level": "raid0", 00:09:39.056 "superblock": true, 00:09:39.056 "num_base_bdevs": 3, 00:09:39.056 "num_base_bdevs_discovered": 1, 00:09:39.056 "num_base_bdevs_operational": 3, 00:09:39.056 "base_bdevs_list": [ 00:09:39.056 { 00:09:39.056 "name": null, 00:09:39.056 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:39.056 "is_configured": false, 00:09:39.056 "data_offset": 0, 00:09:39.056 "data_size": 63488 00:09:39.056 }, 00:09:39.056 { 00:09:39.056 "name": null, 00:09:39.056 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:39.056 "is_configured": false, 00:09:39.056 "data_offset": 0, 00:09:39.056 
"data_size": 63488 00:09:39.056 }, 00:09:39.056 { 00:09:39.056 "name": "BaseBdev3", 00:09:39.056 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:39.056 "is_configured": true, 00:09:39.056 "data_offset": 2048, 00:09:39.056 "data_size": 63488 00:09:39.056 } 00:09:39.056 ] 00:09:39.056 }' 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.056 19:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.624 [2024-11-27 19:07:49.058716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.624 19:07:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.624 "name": "Existed_Raid", 00:09:39.624 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:39.624 "strip_size_kb": 64, 00:09:39.624 "state": "configuring", 00:09:39.624 "raid_level": "raid0", 00:09:39.624 "superblock": true, 00:09:39.624 "num_base_bdevs": 3, 00:09:39.624 
"num_base_bdevs_discovered": 2, 00:09:39.624 "num_base_bdevs_operational": 3, 00:09:39.624 "base_bdevs_list": [ 00:09:39.624 { 00:09:39.624 "name": null, 00:09:39.624 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:39.624 "is_configured": false, 00:09:39.624 "data_offset": 0, 00:09:39.624 "data_size": 63488 00:09:39.624 }, 00:09:39.624 { 00:09:39.624 "name": "BaseBdev2", 00:09:39.624 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:39.624 "is_configured": true, 00:09:39.624 "data_offset": 2048, 00:09:39.624 "data_size": 63488 00:09:39.624 }, 00:09:39.624 { 00:09:39.624 "name": "BaseBdev3", 00:09:39.624 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:39.624 "is_configured": true, 00:09:39.624 "data_offset": 2048, 00:09:39.624 "data_size": 63488 00:09:39.624 } 00:09:39.624 ] 00:09:39.624 }' 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.624 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.194 19:07:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cffbf590-e897-40ca-ac4f-3eea63dd2cb4 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 [2024-11-27 19:07:49.640811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.194 [2024-11-27 19:07:49.641082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.194 [2024-11-27 19:07:49.641100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.194 [2024-11-27 19:07:49.641388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.194 NewBaseBdev 00:09:40.194 [2024-11-27 19:07:49.641550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.194 [2024-11-27 19:07:49.641568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.194 [2024-11-27 19:07:49.641720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.194 
19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 [ 00:09:40.194 { 00:09:40.194 "name": "NewBaseBdev", 00:09:40.194 "aliases": [ 00:09:40.194 "cffbf590-e897-40ca-ac4f-3eea63dd2cb4" 00:09:40.194 ], 00:09:40.194 "product_name": "Malloc disk", 00:09:40.194 "block_size": 512, 00:09:40.194 "num_blocks": 65536, 00:09:40.194 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:40.194 "assigned_rate_limits": { 00:09:40.194 "rw_ios_per_sec": 0, 00:09:40.194 "rw_mbytes_per_sec": 0, 00:09:40.194 "r_mbytes_per_sec": 0, 00:09:40.194 "w_mbytes_per_sec": 0 00:09:40.194 }, 00:09:40.194 "claimed": true, 00:09:40.194 "claim_type": "exclusive_write", 00:09:40.194 "zoned": false, 00:09:40.194 "supported_io_types": { 00:09:40.194 "read": true, 00:09:40.194 "write": true, 00:09:40.194 
"unmap": true, 00:09:40.194 "flush": true, 00:09:40.194 "reset": true, 00:09:40.194 "nvme_admin": false, 00:09:40.194 "nvme_io": false, 00:09:40.194 "nvme_io_md": false, 00:09:40.194 "write_zeroes": true, 00:09:40.194 "zcopy": true, 00:09:40.194 "get_zone_info": false, 00:09:40.194 "zone_management": false, 00:09:40.194 "zone_append": false, 00:09:40.194 "compare": false, 00:09:40.194 "compare_and_write": false, 00:09:40.194 "abort": true, 00:09:40.194 "seek_hole": false, 00:09:40.194 "seek_data": false, 00:09:40.194 "copy": true, 00:09:40.194 "nvme_iov_md": false 00:09:40.194 }, 00:09:40.194 "memory_domains": [ 00:09:40.194 { 00:09:40.194 "dma_device_id": "system", 00:09:40.194 "dma_device_type": 1 00:09:40.194 }, 00:09:40.194 { 00:09:40.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.194 "dma_device_type": 2 00:09:40.194 } 00:09:40.194 ], 00:09:40.194 "driver_specific": {} 00:09:40.194 } 00:09:40.194 ] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.194 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.195 "name": "Existed_Raid", 00:09:40.195 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:40.195 "strip_size_kb": 64, 00:09:40.195 "state": "online", 00:09:40.195 "raid_level": "raid0", 00:09:40.195 "superblock": true, 00:09:40.195 "num_base_bdevs": 3, 00:09:40.195 "num_base_bdevs_discovered": 3, 00:09:40.195 "num_base_bdevs_operational": 3, 00:09:40.195 "base_bdevs_list": [ 00:09:40.195 { 00:09:40.195 "name": "NewBaseBdev", 00:09:40.195 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:40.195 "is_configured": true, 00:09:40.195 "data_offset": 2048, 00:09:40.195 "data_size": 63488 00:09:40.195 }, 00:09:40.195 { 00:09:40.195 "name": "BaseBdev2", 00:09:40.195 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:40.195 "is_configured": true, 00:09:40.195 "data_offset": 2048, 00:09:40.195 "data_size": 63488 00:09:40.195 }, 00:09:40.195 { 00:09:40.195 "name": "BaseBdev3", 00:09:40.195 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:40.195 
"is_configured": true, 00:09:40.195 "data_offset": 2048, 00:09:40.195 "data_size": 63488 00:09:40.195 } 00:09:40.195 ] 00:09:40.195 }' 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.195 19:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.763 [2024-11-27 19:07:50.112309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.763 "name": "Existed_Raid", 00:09:40.763 "aliases": [ 00:09:40.763 "6feec714-d326-46aa-8fb2-83aad742eacd" 00:09:40.763 ], 00:09:40.763 "product_name": "Raid 
Volume", 00:09:40.763 "block_size": 512, 00:09:40.763 "num_blocks": 190464, 00:09:40.763 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:40.763 "assigned_rate_limits": { 00:09:40.763 "rw_ios_per_sec": 0, 00:09:40.763 "rw_mbytes_per_sec": 0, 00:09:40.763 "r_mbytes_per_sec": 0, 00:09:40.763 "w_mbytes_per_sec": 0 00:09:40.763 }, 00:09:40.763 "claimed": false, 00:09:40.763 "zoned": false, 00:09:40.763 "supported_io_types": { 00:09:40.763 "read": true, 00:09:40.763 "write": true, 00:09:40.763 "unmap": true, 00:09:40.763 "flush": true, 00:09:40.763 "reset": true, 00:09:40.763 "nvme_admin": false, 00:09:40.763 "nvme_io": false, 00:09:40.763 "nvme_io_md": false, 00:09:40.763 "write_zeroes": true, 00:09:40.763 "zcopy": false, 00:09:40.763 "get_zone_info": false, 00:09:40.763 "zone_management": false, 00:09:40.763 "zone_append": false, 00:09:40.763 "compare": false, 00:09:40.763 "compare_and_write": false, 00:09:40.763 "abort": false, 00:09:40.763 "seek_hole": false, 00:09:40.763 "seek_data": false, 00:09:40.763 "copy": false, 00:09:40.763 "nvme_iov_md": false 00:09:40.763 }, 00:09:40.763 "memory_domains": [ 00:09:40.763 { 00:09:40.763 "dma_device_id": "system", 00:09:40.763 "dma_device_type": 1 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.763 "dma_device_type": 2 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "dma_device_id": "system", 00:09:40.763 "dma_device_type": 1 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.763 "dma_device_type": 2 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "dma_device_id": "system", 00:09:40.763 "dma_device_type": 1 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.763 "dma_device_type": 2 00:09:40.763 } 00:09:40.763 ], 00:09:40.763 "driver_specific": { 00:09:40.763 "raid": { 00:09:40.763 "uuid": "6feec714-d326-46aa-8fb2-83aad742eacd", 00:09:40.763 "strip_size_kb": 64, 00:09:40.763 "state": "online", 
00:09:40.763 "raid_level": "raid0", 00:09:40.763 "superblock": true, 00:09:40.763 "num_base_bdevs": 3, 00:09:40.763 "num_base_bdevs_discovered": 3, 00:09:40.763 "num_base_bdevs_operational": 3, 00:09:40.763 "base_bdevs_list": [ 00:09:40.763 { 00:09:40.763 "name": "NewBaseBdev", 00:09:40.763 "uuid": "cffbf590-e897-40ca-ac4f-3eea63dd2cb4", 00:09:40.763 "is_configured": true, 00:09:40.763 "data_offset": 2048, 00:09:40.763 "data_size": 63488 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "name": "BaseBdev2", 00:09:40.763 "uuid": "f3a48957-79a0-4e84-a7fc-f3aef32265d9", 00:09:40.763 "is_configured": true, 00:09:40.763 "data_offset": 2048, 00:09:40.763 "data_size": 63488 00:09:40.763 }, 00:09:40.763 { 00:09:40.763 "name": "BaseBdev3", 00:09:40.763 "uuid": "111a087d-2372-465f-9cbd-1d7cd3396f89", 00:09:40.763 "is_configured": true, 00:09:40.763 "data_offset": 2048, 00:09:40.763 "data_size": 63488 00:09:40.763 } 00:09:40.763 ] 00:09:40.763 } 00:09:40.763 } 00:09:40.763 }' 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.763 BaseBdev2 00:09:40.763 BaseBdev3' 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.763 19:07:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.763 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.764 19:07:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.764 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.764 [2024-11-27 19:07:50.391525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.764 [2024-11-27 19:07:50.391553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.764 [2024-11-27 19:07:50.391633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.764 [2024-11-27 19:07:50.391692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.764 [2024-11-27 19:07:50.391705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64542 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64542 ']' 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64542 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64542 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64542' 00:09:41.029 killing process with pid 64542 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64542 00:09:41.029 [2024-11-27 19:07:50.442435] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.029 19:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64542 00:09:41.289 [2024-11-27 19:07:50.770169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.665 19:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.665 ************************************ 00:09:42.665 END TEST raid_state_function_test_sb 00:09:42.665 ************************************ 00:09:42.665 00:09:42.665 real 0m10.730s 00:09:42.665 user 0m16.753s 00:09:42.665 sys 0m2.053s 00:09:42.665 19:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.665 19:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.665 19:07:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:42.665 19:07:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.665 
19:07:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.665 19:07:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.665 ************************************ 00:09:42.665 START TEST raid_superblock_test 00:09:42.665 ************************************ 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65162 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65162 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65162 ']' 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.665 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.665 [2024-11-27 19:07:52.146964] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:42.665 [2024-11-27 19:07:52.147168] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65162 ] 00:09:42.923 [2024-11-27 19:07:52.328879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.923 [2024-11-27 19:07:52.464228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.181 [2024-11-27 19:07:52.702419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.181 [2024-11-27 19:07:52.702469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.438 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.439 19:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:43.439 
19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.439 19:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.439 malloc1 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.439 [2024-11-27 19:07:53.041103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.439 [2024-11-27 19:07:53.041172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.439 [2024-11-27 19:07:53.041196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:43.439 [2024-11-27 19:07:53.041206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.439 [2024-11-27 19:07:53.043762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.439 [2024-11-27 19:07:53.043799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.439 pt1 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.439 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.696 malloc2 00:09:43.696 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.696 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.696 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.696 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.696 [2024-11-27 19:07:53.102768] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.696 [2024-11-27 19:07:53.102876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.696 [2024-11-27 19:07:53.102931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:43.697 [2024-11-27 19:07:53.102962] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.697 [2024-11-27 19:07:53.105506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.697 [2024-11-27 19:07:53.105593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.697 
pt2 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.697 malloc3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.697 [2024-11-27 19:07:53.182974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.697 [2024-11-27 19:07:53.183067] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.697 [2024-11-27 19:07:53.183108] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:43.697 [2024-11-27 19:07:53.183137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.697 [2024-11-27 19:07:53.185518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.697 [2024-11-27 19:07:53.185604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.697 pt3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.697 [2024-11-27 19:07:53.194998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.697 [2024-11-27 19:07:53.197123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.697 [2024-11-27 19:07:53.197248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.697 [2024-11-27 19:07:53.197423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:43.697 [2024-11-27 19:07:53.197437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.697 [2024-11-27 19:07:53.197675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:43.697 [2024-11-27 19:07:53.197865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:43.697 [2024-11-27 19:07:53.197874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:43.697 [2024-11-27 19:07:53.198052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.697 19:07:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.697 "name": "raid_bdev1", 00:09:43.697 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:43.697 "strip_size_kb": 64, 00:09:43.697 "state": "online", 00:09:43.697 "raid_level": "raid0", 00:09:43.697 "superblock": true, 00:09:43.697 "num_base_bdevs": 3, 00:09:43.697 "num_base_bdevs_discovered": 3, 00:09:43.697 "num_base_bdevs_operational": 3, 00:09:43.697 "base_bdevs_list": [ 00:09:43.697 { 00:09:43.697 "name": "pt1", 00:09:43.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.697 "is_configured": true, 00:09:43.697 "data_offset": 2048, 00:09:43.697 "data_size": 63488 00:09:43.697 }, 00:09:43.697 { 00:09:43.697 "name": "pt2", 00:09:43.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.697 "is_configured": true, 00:09:43.697 "data_offset": 2048, 00:09:43.697 "data_size": 63488 00:09:43.697 }, 00:09:43.697 { 00:09:43.697 "name": "pt3", 00:09:43.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.697 "is_configured": true, 00:09:43.697 "data_offset": 2048, 00:09:43.697 "data_size": 63488 00:09:43.697 } 00:09:43.697 ] 00:09:43.697 }' 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.697 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.955 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:43.955 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:43.955 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.955 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:43.955 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.955 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 [2024-11-27 19:07:53.602560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.213 "name": "raid_bdev1", 00:09:44.213 "aliases": [ 00:09:44.213 "070f6130-a318-494c-87d0-b7992cd09cfd" 00:09:44.213 ], 00:09:44.213 "product_name": "Raid Volume", 00:09:44.213 "block_size": 512, 00:09:44.213 "num_blocks": 190464, 00:09:44.213 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:44.213 "assigned_rate_limits": { 00:09:44.213 "rw_ios_per_sec": 0, 00:09:44.213 "rw_mbytes_per_sec": 0, 00:09:44.213 "r_mbytes_per_sec": 0, 00:09:44.213 "w_mbytes_per_sec": 0 00:09:44.213 }, 00:09:44.213 "claimed": false, 00:09:44.213 "zoned": false, 00:09:44.213 "supported_io_types": { 00:09:44.213 "read": true, 00:09:44.213 "write": true, 00:09:44.213 "unmap": true, 00:09:44.213 "flush": true, 00:09:44.213 "reset": true, 00:09:44.213 "nvme_admin": false, 00:09:44.213 "nvme_io": false, 00:09:44.213 "nvme_io_md": false, 00:09:44.213 "write_zeroes": true, 00:09:44.213 "zcopy": false, 00:09:44.213 "get_zone_info": false, 00:09:44.213 "zone_management": false, 00:09:44.213 "zone_append": false, 00:09:44.213 "compare": 
false, 00:09:44.213 "compare_and_write": false, 00:09:44.213 "abort": false, 00:09:44.213 "seek_hole": false, 00:09:44.213 "seek_data": false, 00:09:44.213 "copy": false, 00:09:44.213 "nvme_iov_md": false 00:09:44.213 }, 00:09:44.213 "memory_domains": [ 00:09:44.213 { 00:09:44.213 "dma_device_id": "system", 00:09:44.213 "dma_device_type": 1 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.213 "dma_device_type": 2 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "dma_device_id": "system", 00:09:44.213 "dma_device_type": 1 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.213 "dma_device_type": 2 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "dma_device_id": "system", 00:09:44.213 "dma_device_type": 1 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.213 "dma_device_type": 2 00:09:44.213 } 00:09:44.213 ], 00:09:44.213 "driver_specific": { 00:09:44.213 "raid": { 00:09:44.213 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:44.213 "strip_size_kb": 64, 00:09:44.213 "state": "online", 00:09:44.213 "raid_level": "raid0", 00:09:44.213 "superblock": true, 00:09:44.213 "num_base_bdevs": 3, 00:09:44.213 "num_base_bdevs_discovered": 3, 00:09:44.213 "num_base_bdevs_operational": 3, 00:09:44.213 "base_bdevs_list": [ 00:09:44.213 { 00:09:44.213 "name": "pt1", 00:09:44.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.213 "is_configured": true, 00:09:44.213 "data_offset": 2048, 00:09:44.213 "data_size": 63488 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "name": "pt2", 00:09:44.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.213 "is_configured": true, 00:09:44.213 "data_offset": 2048, 00:09:44.213 "data_size": 63488 00:09:44.213 }, 00:09:44.213 { 00:09:44.213 "name": "pt3", 00:09:44.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.213 "is_configured": true, 00:09:44.213 "data_offset": 2048, 00:09:44.213 "data_size": 
63488 00:09:44.213 } 00:09:44.213 ] 00:09:44.213 } 00:09:44.213 } 00:09:44.213 }' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.213 pt2 00:09:44.213 pt3' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.213 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.473 [2024-11-27 19:07:53.886026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=070f6130-a318-494c-87d0-b7992cd09cfd 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 070f6130-a318-494c-87d0-b7992cd09cfd ']' 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.473 [2024-11-27 19:07:53.933683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.473 [2024-11-27 19:07:53.933711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.473 [2024-11-27 19:07:53.933798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.473 [2024-11-27 19:07:53.933868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.473 [2024-11-27 19:07:53.933879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:44.473 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.474 19:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:44.474 19:07:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.474 [2024-11-27 19:07:54.077485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:44.474 [2024-11-27 19:07:54.079723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:44.474 [2024-11-27 19:07:54.079777] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:44.474 [2024-11-27 19:07:54.079829] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:44.474 [2024-11-27 19:07:54.079878] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:44.474 [2024-11-27 19:07:54.079897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:44.474 [2024-11-27 19:07:54.079914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.474 [2024-11-27 19:07:54.079926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:44.474 request: 00:09:44.474 { 00:09:44.474 "name": "raid_bdev1", 00:09:44.474 "raid_level": "raid0", 00:09:44.474 "base_bdevs": [ 00:09:44.474 "malloc1", 00:09:44.474 "malloc2", 00:09:44.474 "malloc3" 00:09:44.474 ], 00:09:44.474 "strip_size_kb": 64, 00:09:44.474 "superblock": false, 00:09:44.474 "method": "bdev_raid_create", 00:09:44.474 "req_id": 1 00:09:44.474 } 00:09:44.474 Got JSON-RPC error response 00:09:44.474 response: 00:09:44.474 { 00:09:44.474 "code": -17, 00:09:44.474 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:44.474 } 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:44.474 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.734 [2024-11-27 19:07:54.145312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.734 [2024-11-27 19:07:54.145401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.734 [2024-11-27 19:07:54.145437] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:44.734 [2024-11-27 19:07:54.145464] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.734 [2024-11-27 19:07:54.148082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.734 [2024-11-27 19:07:54.148168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.734 [2024-11-27 19:07:54.148267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:44.734 [2024-11-27 19:07:54.148334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:44.734 pt1 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.734 "name": "raid_bdev1", 00:09:44.734 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:44.734 
"strip_size_kb": 64, 00:09:44.734 "state": "configuring", 00:09:44.734 "raid_level": "raid0", 00:09:44.734 "superblock": true, 00:09:44.734 "num_base_bdevs": 3, 00:09:44.734 "num_base_bdevs_discovered": 1, 00:09:44.734 "num_base_bdevs_operational": 3, 00:09:44.734 "base_bdevs_list": [ 00:09:44.734 { 00:09:44.734 "name": "pt1", 00:09:44.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.734 "is_configured": true, 00:09:44.734 "data_offset": 2048, 00:09:44.734 "data_size": 63488 00:09:44.734 }, 00:09:44.734 { 00:09:44.734 "name": null, 00:09:44.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.734 "is_configured": false, 00:09:44.734 "data_offset": 2048, 00:09:44.734 "data_size": 63488 00:09:44.734 }, 00:09:44.734 { 00:09:44.734 "name": null, 00:09:44.734 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.734 "is_configured": false, 00:09:44.734 "data_offset": 2048, 00:09:44.734 "data_size": 63488 00:09:44.734 } 00:09:44.734 ] 00:09:44.734 }' 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.734 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.993 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:44.993 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.994 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.994 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.994 [2024-11-27 19:07:54.616568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.994 [2024-11-27 19:07:54.616647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.994 [2024-11-27 19:07:54.616679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:44.994 [2024-11-27 19:07:54.616688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.994 [2024-11-27 19:07:54.617221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.994 [2024-11-27 19:07:54.617240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.994 [2024-11-27 19:07:54.617337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.994 [2024-11-27 19:07:54.617368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.994 pt2 00:09:44.994 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.994 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.994 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.994 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.253 [2024-11-27 19:07:54.628556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.253 19:07:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.253 "name": "raid_bdev1", 00:09:45.253 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:45.253 "strip_size_kb": 64, 00:09:45.253 "state": "configuring", 00:09:45.253 "raid_level": "raid0", 00:09:45.253 "superblock": true, 00:09:45.253 "num_base_bdevs": 3, 00:09:45.253 "num_base_bdevs_discovered": 1, 00:09:45.253 "num_base_bdevs_operational": 3, 00:09:45.253 "base_bdevs_list": [ 00:09:45.253 { 00:09:45.253 "name": "pt1", 00:09:45.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.253 "is_configured": true, 00:09:45.253 "data_offset": 2048, 00:09:45.253 "data_size": 63488 00:09:45.253 }, 00:09:45.253 { 00:09:45.253 "name": null, 00:09:45.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.253 "is_configured": false, 00:09:45.253 "data_offset": 0, 00:09:45.253 "data_size": 63488 00:09:45.253 }, 00:09:45.253 { 00:09:45.253 "name": null, 00:09:45.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.253 
"is_configured": false, 00:09:45.253 "data_offset": 2048, 00:09:45.253 "data_size": 63488 00:09:45.253 } 00:09:45.253 ] 00:09:45.253 }' 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.253 19:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.512 [2024-11-27 19:07:55.075776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.512 [2024-11-27 19:07:55.075905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.512 [2024-11-27 19:07:55.075953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:45.512 [2024-11-27 19:07:55.075989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.512 [2024-11-27 19:07:55.076574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.512 [2024-11-27 19:07:55.076636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.512 [2024-11-27 19:07:55.076774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.512 [2024-11-27 19:07:55.076833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.512 pt2 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.512 [2024-11-27 19:07:55.083717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.512 [2024-11-27 19:07:55.083797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.512 [2024-11-27 19:07:55.083836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:45.512 [2024-11-27 19:07:55.083867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.512 [2024-11-27 19:07:55.084304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.512 [2024-11-27 19:07:55.084367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.512 [2024-11-27 19:07:55.084450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:45.512 [2024-11-27 19:07:55.084499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.512 [2024-11-27 19:07:55.084662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.512 [2024-11-27 19:07:55.084711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.512 [2024-11-27 19:07:55.084996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:45.512 [2024-11-27 19:07:55.085187] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.512 [2024-11-27 19:07:55.085224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:45.512 [2024-11-27 19:07:55.085412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.512 pt3 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.512 "name": "raid_bdev1", 00:09:45.512 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:45.512 "strip_size_kb": 64, 00:09:45.512 "state": "online", 00:09:45.512 "raid_level": "raid0", 00:09:45.512 "superblock": true, 00:09:45.512 "num_base_bdevs": 3, 00:09:45.512 "num_base_bdevs_discovered": 3, 00:09:45.512 "num_base_bdevs_operational": 3, 00:09:45.512 "base_bdevs_list": [ 00:09:45.512 { 00:09:45.512 "name": "pt1", 00:09:45.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.512 "is_configured": true, 00:09:45.512 "data_offset": 2048, 00:09:45.512 "data_size": 63488 00:09:45.512 }, 00:09:45.512 { 00:09:45.512 "name": "pt2", 00:09:45.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.512 "is_configured": true, 00:09:45.512 "data_offset": 2048, 00:09:45.512 "data_size": 63488 00:09:45.512 }, 00:09:45.512 { 00:09:45.512 "name": "pt3", 00:09:45.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.512 "is_configured": true, 00:09:45.512 "data_offset": 2048, 00:09:45.512 "data_size": 63488 00:09:45.512 } 00:09:45.512 ] 00:09:45.512 }' 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.512 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.100 19:07:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.100 [2024-11-27 19:07:55.559205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.100 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.101 "name": "raid_bdev1", 00:09:46.101 "aliases": [ 00:09:46.101 "070f6130-a318-494c-87d0-b7992cd09cfd" 00:09:46.101 ], 00:09:46.101 "product_name": "Raid Volume", 00:09:46.101 "block_size": 512, 00:09:46.101 "num_blocks": 190464, 00:09:46.101 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:46.101 "assigned_rate_limits": { 00:09:46.101 "rw_ios_per_sec": 0, 00:09:46.101 "rw_mbytes_per_sec": 0, 00:09:46.101 "r_mbytes_per_sec": 0, 00:09:46.101 "w_mbytes_per_sec": 0 00:09:46.101 }, 00:09:46.101 "claimed": false, 00:09:46.101 "zoned": false, 00:09:46.101 "supported_io_types": { 00:09:46.101 "read": true, 00:09:46.101 "write": true, 00:09:46.101 "unmap": true, 00:09:46.101 "flush": true, 00:09:46.101 "reset": true, 00:09:46.101 "nvme_admin": false, 00:09:46.101 "nvme_io": false, 00:09:46.101 "nvme_io_md": false, 00:09:46.101 
"write_zeroes": true, 00:09:46.101 "zcopy": false, 00:09:46.101 "get_zone_info": false, 00:09:46.101 "zone_management": false, 00:09:46.101 "zone_append": false, 00:09:46.101 "compare": false, 00:09:46.101 "compare_and_write": false, 00:09:46.101 "abort": false, 00:09:46.101 "seek_hole": false, 00:09:46.101 "seek_data": false, 00:09:46.101 "copy": false, 00:09:46.101 "nvme_iov_md": false 00:09:46.101 }, 00:09:46.101 "memory_domains": [ 00:09:46.101 { 00:09:46.101 "dma_device_id": "system", 00:09:46.101 "dma_device_type": 1 00:09:46.101 }, 00:09:46.101 { 00:09:46.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.101 "dma_device_type": 2 00:09:46.101 }, 00:09:46.101 { 00:09:46.101 "dma_device_id": "system", 00:09:46.101 "dma_device_type": 1 00:09:46.101 }, 00:09:46.101 { 00:09:46.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.101 "dma_device_type": 2 00:09:46.101 }, 00:09:46.101 { 00:09:46.101 "dma_device_id": "system", 00:09:46.101 "dma_device_type": 1 00:09:46.101 }, 00:09:46.101 { 00:09:46.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.101 "dma_device_type": 2 00:09:46.101 } 00:09:46.101 ], 00:09:46.101 "driver_specific": { 00:09:46.101 "raid": { 00:09:46.101 "uuid": "070f6130-a318-494c-87d0-b7992cd09cfd", 00:09:46.101 "strip_size_kb": 64, 00:09:46.101 "state": "online", 00:09:46.101 "raid_level": "raid0", 00:09:46.101 "superblock": true, 00:09:46.101 "num_base_bdevs": 3, 00:09:46.101 "num_base_bdevs_discovered": 3, 00:09:46.101 "num_base_bdevs_operational": 3, 00:09:46.101 "base_bdevs_list": [ 00:09:46.101 { 00:09:46.101 "name": "pt1", 00:09:46.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.101 "is_configured": true, 00:09:46.101 "data_offset": 2048, 00:09:46.101 "data_size": 63488 00:09:46.101 }, 00:09:46.101 { 00:09:46.101 "name": "pt2", 00:09:46.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.101 "is_configured": true, 00:09:46.101 "data_offset": 2048, 00:09:46.101 "data_size": 63488 00:09:46.101 }, 00:09:46.101 
{ 00:09:46.101 "name": "pt3", 00:09:46.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.101 "is_configured": true, 00:09:46.101 "data_offset": 2048, 00:09:46.101 "data_size": 63488 00:09:46.101 } 00:09:46.101 ] 00:09:46.101 } 00:09:46.101 } 00:09:46.101 }' 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:46.101 pt2 00:09:46.101 pt3' 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.101 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.361 [2024-11-27 
19:07:55.850651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 070f6130-a318-494c-87d0-b7992cd09cfd '!=' 070f6130-a318-494c-87d0-b7992cd09cfd ']' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65162 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65162 ']' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65162 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65162 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.361 killing process with pid 65162 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65162' 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65162 00:09:46.361 [2024-11-27 19:07:55.934552] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.361 [2024-11-27 19:07:55.934663] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.361 [2024-11-27 19:07:55.934749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.361 [2024-11-27 19:07:55.934763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:46.361 19:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65162 00:09:46.930 [2024-11-27 19:07:56.258863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.869 ************************************ 00:09:47.869 END TEST raid_superblock_test 00:09:47.869 ************************************ 00:09:47.869 19:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:47.869 00:09:47.869 real 0m5.391s 00:09:47.869 user 0m7.552s 00:09:47.869 sys 0m1.036s 00:09:47.869 19:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.869 19:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.869 19:07:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:47.869 19:07:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.869 19:07:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.869 19:07:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.129 ************************************ 00:09:48.129 START TEST raid_read_error_test 00:09:48.129 ************************************ 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.129 19:07:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VQJIhUc0d3 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65415 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65415 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65415 ']' 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.129 19:07:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.129 [2024-11-27 19:07:57.627437] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:48.129 [2024-11-27 19:07:57.627581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65415 ] 00:09:48.388 [2024-11-27 19:07:57.806947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.388 [2024-11-27 19:07:57.945134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.647 [2024-11-27 19:07:58.179081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.647 [2024-11-27 19:07:58.179158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 BaseBdev1_malloc 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 true 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.906 [2024-11-27 19:07:58.516965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.906 [2024-11-27 19:07:58.517087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.906 [2024-11-27 19:07:58.517113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.906 [2024-11-27 19:07:58.517125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.906 [2024-11-27 19:07:58.519541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.906 [2024-11-27 19:07:58.519582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.906 BaseBdev1 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.906 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.165 BaseBdev2_malloc 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.165 true 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.165 [2024-11-27 19:07:58.589741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.165 [2024-11-27 19:07:58.589798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.165 [2024-11-27 19:07:58.589815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.165 [2024-11-27 19:07:58.589826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.165 [2024-11-27 19:07:58.592256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.165 [2024-11-27 19:07:58.592299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.165 BaseBdev2 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.165 BaseBdev3_malloc 00:09:49.165 19:07:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.165 true 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.165 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.165 [2024-11-27 19:07:58.681493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.165 [2024-11-27 19:07:58.681555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.165 [2024-11-27 19:07:58.681574] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.165 [2024-11-27 19:07:58.681585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.165 [2024-11-27 19:07:58.684213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.165 [2024-11-27 19:07:58.684318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:49.165 BaseBdev3 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.166 [2024-11-27 19:07:58.693569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.166 [2024-11-27 19:07:58.695721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.166 [2024-11-27 19:07:58.695800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.166 [2024-11-27 19:07:58.696013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:49.166 [2024-11-27 19:07:58.696028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.166 [2024-11-27 19:07:58.696294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:49.166 [2024-11-27 19:07:58.696470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:49.166 [2024-11-27 19:07:58.696484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:49.166 [2024-11-27 19:07:58.696631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.166 19:07:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.166 "name": "raid_bdev1", 00:09:49.166 "uuid": "986cb31b-31bd-42c2-ba20-9edbe03deb1b", 00:09:49.166 "strip_size_kb": 64, 00:09:49.166 "state": "online", 00:09:49.166 "raid_level": "raid0", 00:09:49.166 "superblock": true, 00:09:49.166 "num_base_bdevs": 3, 00:09:49.166 "num_base_bdevs_discovered": 3, 00:09:49.166 "num_base_bdevs_operational": 3, 00:09:49.166 "base_bdevs_list": [ 00:09:49.166 { 00:09:49.166 "name": "BaseBdev1", 00:09:49.166 "uuid": "05abfc00-4f96-51cb-8121-7bc57da642ba", 00:09:49.166 "is_configured": true, 00:09:49.166 "data_offset": 2048, 00:09:49.166 "data_size": 63488 00:09:49.166 }, 00:09:49.166 { 00:09:49.166 "name": "BaseBdev2", 00:09:49.166 "uuid": "b3289377-7521-5ac7-acfa-a039dc2a8326", 00:09:49.166 "is_configured": true, 00:09:49.166 "data_offset": 2048, 00:09:49.166 "data_size": 63488 
00:09:49.166 }, 00:09:49.166 { 00:09:49.166 "name": "BaseBdev3", 00:09:49.166 "uuid": "b14ed476-a974-51da-ae7b-d5261cae8efa", 00:09:49.166 "is_configured": true, 00:09:49.166 "data_offset": 2048, 00:09:49.166 "data_size": 63488 00:09:49.166 } 00:09:49.166 ] 00:09:49.166 }' 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.166 19:07:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.737 19:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.737 19:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.737 [2024-11-27 19:07:59.190269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.678 "name": "raid_bdev1", 00:09:50.678 "uuid": "986cb31b-31bd-42c2-ba20-9edbe03deb1b", 00:09:50.678 "strip_size_kb": 64, 00:09:50.678 "state": "online", 00:09:50.678 "raid_level": "raid0", 00:09:50.678 "superblock": true, 00:09:50.678 "num_base_bdevs": 3, 00:09:50.678 "num_base_bdevs_discovered": 3, 00:09:50.678 "num_base_bdevs_operational": 3, 00:09:50.678 "base_bdevs_list": [ 00:09:50.678 { 00:09:50.678 "name": "BaseBdev1", 00:09:50.678 "uuid": "05abfc00-4f96-51cb-8121-7bc57da642ba", 00:09:50.678 "is_configured": true, 00:09:50.678 "data_offset": 2048, 00:09:50.678 "data_size": 63488 
00:09:50.678 }, 00:09:50.678 { 00:09:50.678 "name": "BaseBdev2", 00:09:50.678 "uuid": "b3289377-7521-5ac7-acfa-a039dc2a8326", 00:09:50.678 "is_configured": true, 00:09:50.678 "data_offset": 2048, 00:09:50.678 "data_size": 63488 00:09:50.678 }, 00:09:50.678 { 00:09:50.678 "name": "BaseBdev3", 00:09:50.678 "uuid": "b14ed476-a974-51da-ae7b-d5261cae8efa", 00:09:50.678 "is_configured": true, 00:09:50.678 "data_offset": 2048, 00:09:50.678 "data_size": 63488 00:09:50.678 } 00:09:50.678 ] 00:09:50.678 }' 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.678 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.937 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.937 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.937 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.937 [2024-11-27 19:08:00.526911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.937 [2024-11-27 19:08:00.527020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.937 [2024-11-27 19:08:00.529821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.937 [2024-11-27 19:08:00.529915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.938 [2024-11-27 19:08:00.530009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.938 [2024-11-27 19:08:00.530055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 65415 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65415 ']' 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65415 00:09:50.938 { 00:09:50.938 "results": [ 00:09:50.938 { 00:09:50.938 "job": "raid_bdev1", 00:09:50.938 "core_mask": "0x1", 00:09:50.938 "workload": "randrw", 00:09:50.938 "percentage": 50, 00:09:50.938 "status": "finished", 00:09:50.938 "queue_depth": 1, 00:09:50.938 "io_size": 131072, 00:09:50.938 "runtime": 1.337207, 00:09:50.938 "iops": 13514.736312328607, 00:09:50.938 "mibps": 1689.3420390410759, 00:09:50.938 "io_failed": 1, 00:09:50.938 "io_timeout": 0, 00:09:50.938 "avg_latency_us": 103.98997447759777, 00:09:50.938 "min_latency_us": 25.7117903930131, 00:09:50.938 "max_latency_us": 1445.2262008733624 00:09:50.938 } 00:09:50.938 ], 00:09:50.938 "core_count": 1 00:09:50.938 } 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65415 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.938 killing process with pid 65415 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65415' 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65415 00:09:50.938 [2024-11-27 19:08:00.563306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.938 19:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65415 00:09:51.196 [2024-11-27 
19:08:00.812634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VQJIhUc0d3 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.576 ************************************ 00:09:52.576 END TEST raid_read_error_test 00:09:52.576 ************************************ 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:52.576 00:09:52.576 real 0m4.595s 00:09:52.576 user 0m5.223s 00:09:52.576 sys 0m0.713s 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.576 19:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.576 19:08:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:52.576 19:08:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.576 19:08:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.576 19:08:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.576 ************************************ 00:09:52.576 START TEST raid_write_error_test 00:09:52.576 ************************************ 00:09:52.576 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:52.576 19:08:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.577 19:08:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EFyfvotmgb 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65561 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65561 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65561 ']' 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.577 19:08:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 [2024-11-27 19:08:02.286129] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:52.837 [2024-11-27 19:08:02.286277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65561 ] 00:09:52.837 [2024-11-27 19:08:02.451865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.097 [2024-11-27 19:08:02.585079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.355 [2024-11-27 19:08:02.832938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.355 [2024-11-27 19:08:02.833030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.615 BaseBdev1_malloc 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.615 true 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.615 [2024-11-27 19:08:03.192634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.615 [2024-11-27 19:08:03.192767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.615 [2024-11-27 19:08:03.192808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.615 [2024-11-27 19:08:03.192842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.615 [2024-11-27 19:08:03.195251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.615 [2024-11-27 19:08:03.195347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.615 BaseBdev1 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.615 19:08:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.874 BaseBdev2_malloc 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 true 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 [2024-11-27 19:08:03.269909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.874 [2024-11-27 19:08:03.270011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.874 [2024-11-27 19:08:03.270033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.874 [2024-11-27 19:08:03.270045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.874 [2024-11-27 19:08:03.272530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.874 [2024-11-27 19:08:03.272572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.874 BaseBdev2 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.874 19:08:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 BaseBdev3_malloc 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 true 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 [2024-11-27 19:08:03.357645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:53.874 [2024-11-27 19:08:03.357723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.874 [2024-11-27 19:08:03.357749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:53.874 [2024-11-27 19:08:03.357761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.874 [2024-11-27 19:08:03.360401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.874 [2024-11-27 19:08:03.360450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:53.874 BaseBdev3 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 [2024-11-27 19:08:03.369721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.874 [2024-11-27 19:08:03.371945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.874 [2024-11-27 19:08:03.372097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.874 [2024-11-27 19:08:03.372362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:53.874 [2024-11-27 19:08:03.372413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.874 [2024-11-27 19:08:03.372737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:53.874 [2024-11-27 19:08:03.372961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:53.874 [2024-11-27 19:08:03.373008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:53.874 [2024-11-27 19:08:03.373219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.874 "name": "raid_bdev1", 00:09:53.874 "uuid": "ba0282f1-1844-43c2-9798-2cc0ff8c218f", 00:09:53.874 "strip_size_kb": 64, 00:09:53.874 "state": "online", 00:09:53.874 "raid_level": "raid0", 00:09:53.874 "superblock": true, 00:09:53.874 "num_base_bdevs": 3, 00:09:53.874 "num_base_bdevs_discovered": 3, 00:09:53.874 "num_base_bdevs_operational": 3, 00:09:53.874 "base_bdevs_list": [ 00:09:53.874 { 00:09:53.874 "name": "BaseBdev1", 
00:09:53.874 "uuid": "98d28974-a729-5b0f-8576-6343ca8e1e0f", 00:09:53.874 "is_configured": true, 00:09:53.874 "data_offset": 2048, 00:09:53.874 "data_size": 63488 00:09:53.874 }, 00:09:53.874 { 00:09:53.874 "name": "BaseBdev2", 00:09:53.874 "uuid": "69b68f79-28ed-5b49-903a-aab2ee81961c", 00:09:53.874 "is_configured": true, 00:09:53.874 "data_offset": 2048, 00:09:53.874 "data_size": 63488 00:09:53.874 }, 00:09:53.874 { 00:09:53.874 "name": "BaseBdev3", 00:09:53.874 "uuid": "56cab4b6-40a0-5ba3-bc36-54e5f53e0c8a", 00:09:53.874 "is_configured": true, 00:09:53.874 "data_offset": 2048, 00:09:53.874 "data_size": 63488 00:09:53.874 } 00:09:53.874 ] 00:09:53.874 }' 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.874 19:08:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.444 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:54.444 19:08:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.444 [2024-11-27 19:08:03.946153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.387 "name": "raid_bdev1", 00:09:55.387 "uuid": "ba0282f1-1844-43c2-9798-2cc0ff8c218f", 00:09:55.387 "strip_size_kb": 64, 00:09:55.387 "state": "online", 00:09:55.387 
"raid_level": "raid0", 00:09:55.387 "superblock": true, 00:09:55.387 "num_base_bdevs": 3, 00:09:55.387 "num_base_bdevs_discovered": 3, 00:09:55.387 "num_base_bdevs_operational": 3, 00:09:55.387 "base_bdevs_list": [ 00:09:55.387 { 00:09:55.387 "name": "BaseBdev1", 00:09:55.387 "uuid": "98d28974-a729-5b0f-8576-6343ca8e1e0f", 00:09:55.387 "is_configured": true, 00:09:55.387 "data_offset": 2048, 00:09:55.387 "data_size": 63488 00:09:55.387 }, 00:09:55.387 { 00:09:55.387 "name": "BaseBdev2", 00:09:55.387 "uuid": "69b68f79-28ed-5b49-903a-aab2ee81961c", 00:09:55.387 "is_configured": true, 00:09:55.387 "data_offset": 2048, 00:09:55.387 "data_size": 63488 00:09:55.387 }, 00:09:55.387 { 00:09:55.387 "name": "BaseBdev3", 00:09:55.387 "uuid": "56cab4b6-40a0-5ba3-bc36-54e5f53e0c8a", 00:09:55.387 "is_configured": true, 00:09:55.387 "data_offset": 2048, 00:09:55.387 "data_size": 63488 00:09:55.387 } 00:09:55.387 ] 00:09:55.387 }' 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.387 19:08:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.647 [2024-11-27 19:08:05.234609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.647 [2024-11-27 19:08:05.234725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.647 [2024-11-27 19:08:05.237675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.647 [2024-11-27 19:08:05.237766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.647 [2024-11-27 19:08:05.237840] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.647 [2024-11-27 19:08:05.237883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:55.647 { 00:09:55.647 "results": [ 00:09:55.647 { 00:09:55.647 "job": "raid_bdev1", 00:09:55.647 "core_mask": "0x1", 00:09:55.647 "workload": "randrw", 00:09:55.647 "percentage": 50, 00:09:55.647 "status": "finished", 00:09:55.647 "queue_depth": 1, 00:09:55.647 "io_size": 131072, 00:09:55.647 "runtime": 1.288775, 00:09:55.647 "iops": 13394.114566157785, 00:09:55.647 "mibps": 1674.264320769723, 00:09:55.647 "io_failed": 1, 00:09:55.647 "io_timeout": 0, 00:09:55.647 "avg_latency_us": 105.05518210818656, 00:09:55.647 "min_latency_us": 22.69344978165939, 00:09:55.647 "max_latency_us": 1395.1441048034935 00:09:55.647 } 00:09:55.647 ], 00:09:55.647 "core_count": 1 00:09:55.647 } 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65561 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65561 ']' 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65561 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65561 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65561' 00:09:55.647 killing process with pid 65561 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65561 00:09:55.647 [2024-11-27 19:08:05.274561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.647 19:08:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65561 00:09:55.907 [2024-11-27 19:08:05.527053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EFyfvotmgb 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:09:57.287 ************************************ 00:09:57.287 END TEST raid_write_error_test 00:09:57.287 ************************************ 00:09:57.287 00:09:57.287 real 0m4.637s 00:09:57.287 user 0m5.283s 00:09:57.287 sys 0m0.706s 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.287 19:08:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.287 19:08:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.287 19:08:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:57.287 19:08:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.287 19:08:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.287 19:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.287 ************************************ 00:09:57.287 START TEST raid_state_function_test 00:09:57.287 ************************************ 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.287 19:08:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.287 Process raid pid: 65705 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65705 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65705' 00:09:57.287 19:08:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65705 00:09:57.288 19:08:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65705 ']' 00:09:57.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.288 19:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.288 19:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.288 19:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.288 19:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.288 19:08:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.548 [2024-11-27 19:08:06.986901] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:57.548 [2024-11-27 19:08:06.987052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.548 [2024-11-27 19:08:07.169107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.807 [2024-11-27 19:08:07.304507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.067 [2024-11-27 19:08:07.546241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.067 [2024-11-27 19:08:07.546287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.327 [2024-11-27 19:08:07.818530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.327 [2024-11-27 19:08:07.818644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.327 [2024-11-27 19:08:07.818676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.327 [2024-11-27 19:08:07.818713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.327 [2024-11-27 19:08:07.818734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.327 [2024-11-27 19:08:07.818755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.327 "name": "Existed_Raid", 00:09:58.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.327 "strip_size_kb": 64, 00:09:58.327 "state": "configuring", 00:09:58.327 "raid_level": "concat", 00:09:58.327 "superblock": false, 00:09:58.327 "num_base_bdevs": 3, 00:09:58.327 "num_base_bdevs_discovered": 0, 00:09:58.327 "num_base_bdevs_operational": 3, 00:09:58.327 "base_bdevs_list": [ 00:09:58.327 { 00:09:58.327 "name": "BaseBdev1", 00:09:58.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.327 "is_configured": false, 00:09:58.327 "data_offset": 0, 00:09:58.327 "data_size": 0 00:09:58.327 }, 00:09:58.327 { 00:09:58.327 "name": "BaseBdev2", 00:09:58.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.327 "is_configured": false, 00:09:58.327 "data_offset": 0, 00:09:58.327 "data_size": 0 00:09:58.327 }, 00:09:58.327 { 00:09:58.327 "name": "BaseBdev3", 00:09:58.327 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.327 "is_configured": false, 00:09:58.327 "data_offset": 0, 00:09:58.327 "data_size": 0 00:09:58.327 } 00:09:58.327 ] 00:09:58.327 }' 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.327 19:08:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 [2024-11-27 19:08:08.229778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.897 [2024-11-27 19:08:08.229874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 [2024-11-27 19:08:08.241758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.897 [2024-11-27 19:08:08.241842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.897 [2024-11-27 19:08:08.241870] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.897 [2024-11-27 19:08:08.241893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:58.897 [2024-11-27 19:08:08.241911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.897 [2024-11-27 19:08:08.241932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 [2024-11-27 19:08:08.295341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.897 BaseBdev1 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 [ 00:09:58.897 { 00:09:58.897 "name": "BaseBdev1", 00:09:58.897 "aliases": [ 00:09:58.897 "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93" 00:09:58.897 ], 00:09:58.897 "product_name": "Malloc disk", 00:09:58.897 "block_size": 512, 00:09:58.897 "num_blocks": 65536, 00:09:58.897 "uuid": "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93", 00:09:58.897 "assigned_rate_limits": { 00:09:58.897 "rw_ios_per_sec": 0, 00:09:58.897 "rw_mbytes_per_sec": 0, 00:09:58.897 "r_mbytes_per_sec": 0, 00:09:58.897 "w_mbytes_per_sec": 0 00:09:58.897 }, 00:09:58.897 "claimed": true, 00:09:58.897 "claim_type": "exclusive_write", 00:09:58.897 "zoned": false, 00:09:58.897 "supported_io_types": { 00:09:58.897 "read": true, 00:09:58.897 "write": true, 00:09:58.897 "unmap": true, 00:09:58.897 "flush": true, 00:09:58.897 "reset": true, 00:09:58.897 "nvme_admin": false, 00:09:58.897 "nvme_io": false, 00:09:58.897 "nvme_io_md": false, 00:09:58.897 "write_zeroes": true, 00:09:58.897 "zcopy": true, 00:09:58.897 "get_zone_info": false, 00:09:58.897 "zone_management": false, 00:09:58.897 "zone_append": false, 00:09:58.897 "compare": false, 00:09:58.897 "compare_and_write": false, 00:09:58.897 "abort": true, 00:09:58.897 "seek_hole": false, 00:09:58.897 "seek_data": false, 00:09:58.897 "copy": true, 00:09:58.897 "nvme_iov_md": false 00:09:58.897 }, 00:09:58.897 "memory_domains": [ 00:09:58.897 { 00:09:58.897 "dma_device_id": "system", 00:09:58.897 "dma_device_type": 1 00:09:58.897 }, 00:09:58.897 { 00:09:58.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:58.897 "dma_device_type": 2 00:09:58.897 } 00:09:58.897 ], 00:09:58.897 "driver_specific": {} 00:09:58.897 } 00:09:58.897 ] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.897 19:08:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.897 "name": "Existed_Raid", 00:09:58.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.897 "strip_size_kb": 64, 00:09:58.897 "state": "configuring", 00:09:58.897 "raid_level": "concat", 00:09:58.897 "superblock": false, 00:09:58.897 "num_base_bdevs": 3, 00:09:58.897 "num_base_bdevs_discovered": 1, 00:09:58.897 "num_base_bdevs_operational": 3, 00:09:58.897 "base_bdevs_list": [ 00:09:58.897 { 00:09:58.897 "name": "BaseBdev1", 00:09:58.897 "uuid": "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93", 00:09:58.897 "is_configured": true, 00:09:58.897 "data_offset": 0, 00:09:58.897 "data_size": 65536 00:09:58.897 }, 00:09:58.897 { 00:09:58.897 "name": "BaseBdev2", 00:09:58.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.897 "is_configured": false, 00:09:58.897 "data_offset": 0, 00:09:58.897 "data_size": 0 00:09:58.897 }, 00:09:58.897 { 00:09:58.897 "name": "BaseBdev3", 00:09:58.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.897 "is_configured": false, 00:09:58.897 "data_offset": 0, 00:09:58.897 "data_size": 0 00:09:58.897 } 00:09:58.897 ] 00:09:58.897 }' 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.897 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.158 [2024-11-27 19:08:08.770607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.158 [2024-11-27 19:08:08.770739] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.158 [2024-11-27 19:08:08.782627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.158 [2024-11-27 19:08:08.784926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.158 [2024-11-27 19:08:08.785014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.158 [2024-11-27 19:08:08.785043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.158 [2024-11-27 19:08:08.785066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.158 19:08:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.158 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.418 "name": "Existed_Raid", 00:09:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.418 "strip_size_kb": 64, 00:09:59.418 "state": "configuring", 00:09:59.418 "raid_level": "concat", 00:09:59.418 "superblock": false, 00:09:59.418 "num_base_bdevs": 3, 00:09:59.418 "num_base_bdevs_discovered": 1, 00:09:59.418 "num_base_bdevs_operational": 3, 00:09:59.418 "base_bdevs_list": [ 00:09:59.418 { 00:09:59.418 "name": "BaseBdev1", 00:09:59.418 "uuid": "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93", 00:09:59.418 "is_configured": true, 00:09:59.418 "data_offset": 
0, 00:09:59.418 "data_size": 65536 00:09:59.418 }, 00:09:59.418 { 00:09:59.418 "name": "BaseBdev2", 00:09:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.418 "is_configured": false, 00:09:59.418 "data_offset": 0, 00:09:59.418 "data_size": 0 00:09:59.418 }, 00:09:59.418 { 00:09:59.418 "name": "BaseBdev3", 00:09:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.418 "is_configured": false, 00:09:59.418 "data_offset": 0, 00:09:59.418 "data_size": 0 00:09:59.418 } 00:09:59.418 ] 00:09:59.418 }' 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.418 19:08:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.677 [2024-11-27 19:08:09.300554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.677 BaseBdev2 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.677 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.935 [ 00:09:59.935 { 00:09:59.935 "name": "BaseBdev2", 00:09:59.935 "aliases": [ 00:09:59.935 "c573a876-ef9b-43dd-af3b-2546938f3c7f" 00:09:59.935 ], 00:09:59.935 "product_name": "Malloc disk", 00:09:59.935 "block_size": 512, 00:09:59.935 "num_blocks": 65536, 00:09:59.935 "uuid": "c573a876-ef9b-43dd-af3b-2546938f3c7f", 00:09:59.935 "assigned_rate_limits": { 00:09:59.935 "rw_ios_per_sec": 0, 00:09:59.935 "rw_mbytes_per_sec": 0, 00:09:59.935 "r_mbytes_per_sec": 0, 00:09:59.935 "w_mbytes_per_sec": 0 00:09:59.935 }, 00:09:59.935 "claimed": true, 00:09:59.935 "claim_type": "exclusive_write", 00:09:59.935 "zoned": false, 00:09:59.935 "supported_io_types": { 00:09:59.935 "read": true, 00:09:59.935 "write": true, 00:09:59.935 "unmap": true, 00:09:59.935 "flush": true, 00:09:59.935 "reset": true, 00:09:59.935 "nvme_admin": false, 00:09:59.935 "nvme_io": false, 00:09:59.935 "nvme_io_md": false, 00:09:59.935 "write_zeroes": true, 00:09:59.935 "zcopy": true, 00:09:59.935 "get_zone_info": false, 00:09:59.935 "zone_management": false, 00:09:59.935 "zone_append": false, 00:09:59.935 "compare": false, 00:09:59.935 "compare_and_write": false, 00:09:59.935 "abort": true, 00:09:59.935 "seek_hole": 
false, 00:09:59.935 "seek_data": false, 00:09:59.935 "copy": true, 00:09:59.935 "nvme_iov_md": false 00:09:59.935 }, 00:09:59.935 "memory_domains": [ 00:09:59.935 { 00:09:59.935 "dma_device_id": "system", 00:09:59.935 "dma_device_type": 1 00:09:59.935 }, 00:09:59.935 { 00:09:59.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.935 "dma_device_type": 2 00:09:59.935 } 00:09:59.935 ], 00:09:59.935 "driver_specific": {} 00:09:59.935 } 00:09:59.935 ] 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.935 "name": "Existed_Raid", 00:09:59.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.935 "strip_size_kb": 64, 00:09:59.935 "state": "configuring", 00:09:59.935 "raid_level": "concat", 00:09:59.935 "superblock": false, 00:09:59.935 "num_base_bdevs": 3, 00:09:59.935 "num_base_bdevs_discovered": 2, 00:09:59.935 "num_base_bdevs_operational": 3, 00:09:59.935 "base_bdevs_list": [ 00:09:59.935 { 00:09:59.935 "name": "BaseBdev1", 00:09:59.935 "uuid": "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93", 00:09:59.935 "is_configured": true, 00:09:59.935 "data_offset": 0, 00:09:59.935 "data_size": 65536 00:09:59.935 }, 00:09:59.935 { 00:09:59.935 "name": "BaseBdev2", 00:09:59.935 "uuid": "c573a876-ef9b-43dd-af3b-2546938f3c7f", 00:09:59.935 "is_configured": true, 00:09:59.935 "data_offset": 0, 00:09:59.935 "data_size": 65536 00:09:59.935 }, 00:09:59.935 { 00:09:59.935 "name": "BaseBdev3", 00:09:59.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.935 "is_configured": false, 00:09:59.935 "data_offset": 0, 00:09:59.935 "data_size": 0 00:09:59.935 } 00:09:59.935 ] 00:09:59.935 }' 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.935 19:08:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.195 [2024-11-27 19:08:09.821653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.195 [2024-11-27 19:08:09.821844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.195 [2024-11-27 19:08:09.821880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:00.195 [2024-11-27 19:08:09.822234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.195 [2024-11-27 19:08:09.822480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.195 [2024-11-27 19:08:09.822524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.195 [2024-11-27 19:08:09.822857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.195 BaseBdev3 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.195 19:08:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.195 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.454 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.454 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.454 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.454 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.454 [ 00:10:00.454 { 00:10:00.454 "name": "BaseBdev3", 00:10:00.454 "aliases": [ 00:10:00.454 "cfa047f6-6cfa-498e-9801-8559591bac5b" 00:10:00.454 ], 00:10:00.454 "product_name": "Malloc disk", 00:10:00.454 "block_size": 512, 00:10:00.454 "num_blocks": 65536, 00:10:00.454 "uuid": "cfa047f6-6cfa-498e-9801-8559591bac5b", 00:10:00.455 "assigned_rate_limits": { 00:10:00.455 "rw_ios_per_sec": 0, 00:10:00.455 "rw_mbytes_per_sec": 0, 00:10:00.455 "r_mbytes_per_sec": 0, 00:10:00.455 "w_mbytes_per_sec": 0 00:10:00.455 }, 00:10:00.455 "claimed": true, 00:10:00.455 "claim_type": "exclusive_write", 00:10:00.455 "zoned": false, 00:10:00.455 "supported_io_types": { 00:10:00.455 "read": true, 00:10:00.455 "write": true, 00:10:00.455 "unmap": true, 00:10:00.455 "flush": true, 00:10:00.455 "reset": true, 00:10:00.455 "nvme_admin": false, 00:10:00.455 "nvme_io": false, 00:10:00.455 "nvme_io_md": false, 00:10:00.455 "write_zeroes": true, 00:10:00.455 "zcopy": true, 00:10:00.455 "get_zone_info": false, 00:10:00.455 "zone_management": false, 00:10:00.455 "zone_append": false, 00:10:00.455 "compare": false, 
00:10:00.455 "compare_and_write": false, 00:10:00.455 "abort": true, 00:10:00.455 "seek_hole": false, 00:10:00.455 "seek_data": false, 00:10:00.455 "copy": true, 00:10:00.455 "nvme_iov_md": false 00:10:00.455 }, 00:10:00.455 "memory_domains": [ 00:10:00.455 { 00:10:00.455 "dma_device_id": "system", 00:10:00.455 "dma_device_type": 1 00:10:00.455 }, 00:10:00.455 { 00:10:00.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.455 "dma_device_type": 2 00:10:00.455 } 00:10:00.455 ], 00:10:00.455 "driver_specific": {} 00:10:00.455 } 00:10:00.455 ] 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.455 "name": "Existed_Raid", 00:10:00.455 "uuid": "5d108a45-aacf-4286-8f9b-2b9fbf6c4b6a", 00:10:00.455 "strip_size_kb": 64, 00:10:00.455 "state": "online", 00:10:00.455 "raid_level": "concat", 00:10:00.455 "superblock": false, 00:10:00.455 "num_base_bdevs": 3, 00:10:00.455 "num_base_bdevs_discovered": 3, 00:10:00.455 "num_base_bdevs_operational": 3, 00:10:00.455 "base_bdevs_list": [ 00:10:00.455 { 00:10:00.455 "name": "BaseBdev1", 00:10:00.455 "uuid": "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93", 00:10:00.455 "is_configured": true, 00:10:00.455 "data_offset": 0, 00:10:00.455 "data_size": 65536 00:10:00.455 }, 00:10:00.455 { 00:10:00.455 "name": "BaseBdev2", 00:10:00.455 "uuid": "c573a876-ef9b-43dd-af3b-2546938f3c7f", 00:10:00.455 "is_configured": true, 00:10:00.455 "data_offset": 0, 00:10:00.455 "data_size": 65536 00:10:00.455 }, 00:10:00.455 { 00:10:00.455 "name": "BaseBdev3", 00:10:00.455 "uuid": "cfa047f6-6cfa-498e-9801-8559591bac5b", 00:10:00.455 "is_configured": true, 00:10:00.455 "data_offset": 0, 00:10:00.455 "data_size": 65536 00:10:00.455 } 00:10:00.455 ] 00:10:00.455 }' 00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:00.455 19:08:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.715 [2024-11-27 19:08:10.289207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.715 "name": "Existed_Raid", 00:10:00.715 "aliases": [ 00:10:00.715 "5d108a45-aacf-4286-8f9b-2b9fbf6c4b6a" 00:10:00.715 ], 00:10:00.715 "product_name": "Raid Volume", 00:10:00.715 "block_size": 512, 00:10:00.715 "num_blocks": 196608, 00:10:00.715 "uuid": "5d108a45-aacf-4286-8f9b-2b9fbf6c4b6a", 00:10:00.715 "assigned_rate_limits": { 00:10:00.715 "rw_ios_per_sec": 0, 00:10:00.715 "rw_mbytes_per_sec": 0, 00:10:00.715 "r_mbytes_per_sec": 
0, 00:10:00.715 "w_mbytes_per_sec": 0 00:10:00.715 }, 00:10:00.715 "claimed": false, 00:10:00.715 "zoned": false, 00:10:00.715 "supported_io_types": { 00:10:00.715 "read": true, 00:10:00.715 "write": true, 00:10:00.715 "unmap": true, 00:10:00.715 "flush": true, 00:10:00.715 "reset": true, 00:10:00.715 "nvme_admin": false, 00:10:00.715 "nvme_io": false, 00:10:00.715 "nvme_io_md": false, 00:10:00.715 "write_zeroes": true, 00:10:00.715 "zcopy": false, 00:10:00.715 "get_zone_info": false, 00:10:00.715 "zone_management": false, 00:10:00.715 "zone_append": false, 00:10:00.715 "compare": false, 00:10:00.715 "compare_and_write": false, 00:10:00.715 "abort": false, 00:10:00.715 "seek_hole": false, 00:10:00.715 "seek_data": false, 00:10:00.715 "copy": false, 00:10:00.715 "nvme_iov_md": false 00:10:00.715 }, 00:10:00.715 "memory_domains": [ 00:10:00.715 { 00:10:00.715 "dma_device_id": "system", 00:10:00.715 "dma_device_type": 1 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.715 "dma_device_type": 2 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "dma_device_id": "system", 00:10:00.715 "dma_device_type": 1 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.715 "dma_device_type": 2 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "dma_device_id": "system", 00:10:00.715 "dma_device_type": 1 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.715 "dma_device_type": 2 00:10:00.715 } 00:10:00.715 ], 00:10:00.715 "driver_specific": { 00:10:00.715 "raid": { 00:10:00.715 "uuid": "5d108a45-aacf-4286-8f9b-2b9fbf6c4b6a", 00:10:00.715 "strip_size_kb": 64, 00:10:00.715 "state": "online", 00:10:00.715 "raid_level": "concat", 00:10:00.715 "superblock": false, 00:10:00.715 "num_base_bdevs": 3, 00:10:00.715 "num_base_bdevs_discovered": 3, 00:10:00.715 "num_base_bdevs_operational": 3, 00:10:00.715 "base_bdevs_list": [ 00:10:00.715 { 00:10:00.715 "name": "BaseBdev1", 
00:10:00.715 "uuid": "1ba348c6-b0b9-47d5-a7cb-f3cd01f2ac93", 00:10:00.715 "is_configured": true, 00:10:00.715 "data_offset": 0, 00:10:00.715 "data_size": 65536 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "name": "BaseBdev2", 00:10:00.715 "uuid": "c573a876-ef9b-43dd-af3b-2546938f3c7f", 00:10:00.715 "is_configured": true, 00:10:00.715 "data_offset": 0, 00:10:00.715 "data_size": 65536 00:10:00.715 }, 00:10:00.715 { 00:10:00.715 "name": "BaseBdev3", 00:10:00.715 "uuid": "cfa047f6-6cfa-498e-9801-8559591bac5b", 00:10:00.715 "is_configured": true, 00:10:00.715 "data_offset": 0, 00:10:00.715 "data_size": 65536 00:10:00.715 } 00:10:00.715 ] 00:10:00.715 } 00:10:00.715 } 00:10:00.715 }' 00:10:00.715 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:00.975 BaseBdev2 00:10:00.975 BaseBdev3' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.975 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.975 [2024-11-27 19:08:10.588447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.975 [2024-11-27 19:08:10.588523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.975 [2024-11-27 19:08:10.588606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.234 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.234 "name": "Existed_Raid", 00:10:01.234 "uuid": "5d108a45-aacf-4286-8f9b-2b9fbf6c4b6a", 00:10:01.234 "strip_size_kb": 64, 00:10:01.234 "state": "offline", 00:10:01.234 "raid_level": "concat", 00:10:01.234 "superblock": false, 00:10:01.234 "num_base_bdevs": 3, 00:10:01.234 "num_base_bdevs_discovered": 2, 00:10:01.235 "num_base_bdevs_operational": 2, 00:10:01.235 "base_bdevs_list": [ 00:10:01.235 { 00:10:01.235 "name": null, 00:10:01.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.235 "is_configured": false, 00:10:01.235 "data_offset": 0, 00:10:01.235 "data_size": 65536 00:10:01.235 }, 00:10:01.235 { 00:10:01.235 "name": "BaseBdev2", 00:10:01.235 "uuid": 
"c573a876-ef9b-43dd-af3b-2546938f3c7f", 00:10:01.235 "is_configured": true, 00:10:01.235 "data_offset": 0, 00:10:01.235 "data_size": 65536 00:10:01.235 }, 00:10:01.235 { 00:10:01.235 "name": "BaseBdev3", 00:10:01.235 "uuid": "cfa047f6-6cfa-498e-9801-8559591bac5b", 00:10:01.235 "is_configured": true, 00:10:01.235 "data_offset": 0, 00:10:01.235 "data_size": 65536 00:10:01.235 } 00:10:01.235 ] 00:10:01.235 }' 00:10:01.235 19:08:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.235 19:08:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 [2024-11-27 19:08:11.214003] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.804 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 [2024-11-27 19:08:11.374801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.804 [2024-11-27 19:08:11.374918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.063 19:08:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.063 BaseBdev2 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.063 
19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.063 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.063 [ 00:10:02.063 { 00:10:02.063 "name": "BaseBdev2", 00:10:02.063 "aliases": [ 00:10:02.063 "084bc854-2b1a-47ac-ac37-355587508d05" 00:10:02.063 ], 00:10:02.063 "product_name": "Malloc disk", 00:10:02.063 "block_size": 512, 00:10:02.063 "num_blocks": 65536, 00:10:02.063 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:02.063 "assigned_rate_limits": { 00:10:02.063 "rw_ios_per_sec": 0, 00:10:02.063 "rw_mbytes_per_sec": 0, 00:10:02.063 "r_mbytes_per_sec": 0, 00:10:02.063 "w_mbytes_per_sec": 0 00:10:02.063 }, 00:10:02.063 "claimed": false, 00:10:02.063 "zoned": false, 00:10:02.063 "supported_io_types": { 00:10:02.063 "read": true, 00:10:02.063 "write": true, 00:10:02.063 "unmap": true, 00:10:02.063 "flush": true, 00:10:02.063 "reset": true, 00:10:02.063 "nvme_admin": false, 00:10:02.063 "nvme_io": false, 00:10:02.063 "nvme_io_md": false, 00:10:02.063 "write_zeroes": true, 
00:10:02.063 "zcopy": true, 00:10:02.063 "get_zone_info": false, 00:10:02.063 "zone_management": false, 00:10:02.063 "zone_append": false, 00:10:02.063 "compare": false, 00:10:02.063 "compare_and_write": false, 00:10:02.064 "abort": true, 00:10:02.064 "seek_hole": false, 00:10:02.064 "seek_data": false, 00:10:02.064 "copy": true, 00:10:02.064 "nvme_iov_md": false 00:10:02.064 }, 00:10:02.064 "memory_domains": [ 00:10:02.064 { 00:10:02.064 "dma_device_id": "system", 00:10:02.064 "dma_device_type": 1 00:10:02.064 }, 00:10:02.064 { 00:10:02.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.064 "dma_device_type": 2 00:10:02.064 } 00:10:02.064 ], 00:10:02.064 "driver_specific": {} 00:10:02.064 } 00:10:02.064 ] 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 BaseBdev3 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.064 19:08:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 [ 00:10:02.064 { 00:10:02.064 "name": "BaseBdev3", 00:10:02.064 "aliases": [ 00:10:02.064 "508507f4-6522-4bd8-b427-4f0008318c09" 00:10:02.064 ], 00:10:02.064 "product_name": "Malloc disk", 00:10:02.064 "block_size": 512, 00:10:02.064 "num_blocks": 65536, 00:10:02.064 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:02.064 "assigned_rate_limits": { 00:10:02.064 "rw_ios_per_sec": 0, 00:10:02.064 "rw_mbytes_per_sec": 0, 00:10:02.064 "r_mbytes_per_sec": 0, 00:10:02.064 "w_mbytes_per_sec": 0 00:10:02.064 }, 00:10:02.064 "claimed": false, 00:10:02.064 "zoned": false, 00:10:02.064 "supported_io_types": { 00:10:02.064 "read": true, 00:10:02.064 "write": true, 00:10:02.064 "unmap": true, 00:10:02.064 "flush": true, 00:10:02.064 "reset": true, 00:10:02.064 "nvme_admin": false, 00:10:02.064 "nvme_io": false, 00:10:02.064 "nvme_io_md": false, 00:10:02.064 "write_zeroes": true, 
00:10:02.064 "zcopy": true, 00:10:02.064 "get_zone_info": false, 00:10:02.064 "zone_management": false, 00:10:02.064 "zone_append": false, 00:10:02.064 "compare": false, 00:10:02.064 "compare_and_write": false, 00:10:02.064 "abort": true, 00:10:02.064 "seek_hole": false, 00:10:02.064 "seek_data": false, 00:10:02.064 "copy": true, 00:10:02.064 "nvme_iov_md": false 00:10:02.342 }, 00:10:02.342 "memory_domains": [ 00:10:02.342 { 00:10:02.342 "dma_device_id": "system", 00:10:02.342 "dma_device_type": 1 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.342 "dma_device_type": 2 00:10:02.342 } 00:10:02.342 ], 00:10:02.342 "driver_specific": {} 00:10:02.342 } 00:10:02.342 ] 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 [2024-11-27 19:08:11.707270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.342 [2024-11-27 19:08:11.707365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.342 [2024-11-27 19:08:11.707412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.342 [2024-11-27 19:08:11.709525] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.342 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.343 "name": "Existed_Raid", 00:10:02.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.343 "strip_size_kb": 64, 00:10:02.343 "state": "configuring", 00:10:02.343 "raid_level": "concat", 00:10:02.343 "superblock": false, 00:10:02.343 "num_base_bdevs": 3, 00:10:02.343 "num_base_bdevs_discovered": 2, 00:10:02.343 "num_base_bdevs_operational": 3, 00:10:02.343 "base_bdevs_list": [ 00:10:02.343 { 00:10:02.343 "name": "BaseBdev1", 00:10:02.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.343 "is_configured": false, 00:10:02.343 "data_offset": 0, 00:10:02.343 "data_size": 0 00:10:02.343 }, 00:10:02.343 { 00:10:02.343 "name": "BaseBdev2", 00:10:02.343 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:02.343 "is_configured": true, 00:10:02.343 "data_offset": 0, 00:10:02.343 "data_size": 65536 00:10:02.343 }, 00:10:02.343 { 00:10:02.343 "name": "BaseBdev3", 00:10:02.343 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:02.343 "is_configured": true, 00:10:02.343 "data_offset": 0, 00:10:02.343 "data_size": 65536 00:10:02.343 } 00:10:02.343 ] 00:10:02.343 }' 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.343 19:08:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.602 [2024-11-27 19:08:12.190543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.602 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.862 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.862 "name": "Existed_Raid", 00:10:02.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.862 "strip_size_kb": 64, 00:10:02.862 "state": "configuring", 00:10:02.862 "raid_level": "concat", 00:10:02.862 "superblock": false, 
00:10:02.862 "num_base_bdevs": 3, 00:10:02.862 "num_base_bdevs_discovered": 1, 00:10:02.862 "num_base_bdevs_operational": 3, 00:10:02.862 "base_bdevs_list": [ 00:10:02.862 { 00:10:02.862 "name": "BaseBdev1", 00:10:02.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.862 "is_configured": false, 00:10:02.862 "data_offset": 0, 00:10:02.862 "data_size": 0 00:10:02.862 }, 00:10:02.862 { 00:10:02.862 "name": null, 00:10:02.862 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:02.862 "is_configured": false, 00:10:02.862 "data_offset": 0, 00:10:02.862 "data_size": 65536 00:10:02.862 }, 00:10:02.862 { 00:10:02.862 "name": "BaseBdev3", 00:10:02.862 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:02.862 "is_configured": true, 00:10:02.862 "data_offset": 0, 00:10:02.862 "data_size": 65536 00:10:02.862 } 00:10:02.862 ] 00:10:02.862 }' 00:10:02.862 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.862 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.121 
19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.121 [2024-11-27 19:08:12.708165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.121 BaseBdev1 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.121 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 [ 00:10:03.122 { 00:10:03.122 "name": "BaseBdev1", 00:10:03.122 "aliases": [ 00:10:03.122 "3e229b8b-8b0a-441b-896f-dc3bae2a8379" 00:10:03.122 ], 00:10:03.122 "product_name": 
"Malloc disk", 00:10:03.122 "block_size": 512, 00:10:03.122 "num_blocks": 65536, 00:10:03.122 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:03.122 "assigned_rate_limits": { 00:10:03.122 "rw_ios_per_sec": 0, 00:10:03.122 "rw_mbytes_per_sec": 0, 00:10:03.122 "r_mbytes_per_sec": 0, 00:10:03.122 "w_mbytes_per_sec": 0 00:10:03.122 }, 00:10:03.122 "claimed": true, 00:10:03.122 "claim_type": "exclusive_write", 00:10:03.122 "zoned": false, 00:10:03.122 "supported_io_types": { 00:10:03.122 "read": true, 00:10:03.122 "write": true, 00:10:03.122 "unmap": true, 00:10:03.122 "flush": true, 00:10:03.122 "reset": true, 00:10:03.122 "nvme_admin": false, 00:10:03.122 "nvme_io": false, 00:10:03.122 "nvme_io_md": false, 00:10:03.122 "write_zeroes": true, 00:10:03.122 "zcopy": true, 00:10:03.122 "get_zone_info": false, 00:10:03.122 "zone_management": false, 00:10:03.122 "zone_append": false, 00:10:03.122 "compare": false, 00:10:03.122 "compare_and_write": false, 00:10:03.122 "abort": true, 00:10:03.122 "seek_hole": false, 00:10:03.122 "seek_data": false, 00:10:03.122 "copy": true, 00:10:03.122 "nvme_iov_md": false 00:10:03.122 }, 00:10:03.122 "memory_domains": [ 00:10:03.122 { 00:10:03.122 "dma_device_id": "system", 00:10:03.122 "dma_device_type": 1 00:10:03.122 }, 00:10:03.122 { 00:10:03.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.122 "dma_device_type": 2 00:10:03.122 } 00:10:03.122 ], 00:10:03.122 "driver_specific": {} 00:10:03.122 } 00:10:03.122 ] 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.122 19:08:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.381 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.381 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.381 "name": "Existed_Raid", 00:10:03.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.381 "strip_size_kb": 64, 00:10:03.381 "state": "configuring", 00:10:03.381 "raid_level": "concat", 00:10:03.381 "superblock": false, 00:10:03.381 "num_base_bdevs": 3, 00:10:03.381 "num_base_bdevs_discovered": 2, 00:10:03.381 "num_base_bdevs_operational": 3, 00:10:03.381 "base_bdevs_list": [ 00:10:03.381 { 00:10:03.381 "name": "BaseBdev1", 
00:10:03.381 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:03.381 "is_configured": true, 00:10:03.381 "data_offset": 0, 00:10:03.381 "data_size": 65536 00:10:03.381 }, 00:10:03.381 { 00:10:03.381 "name": null, 00:10:03.381 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:03.381 "is_configured": false, 00:10:03.381 "data_offset": 0, 00:10:03.381 "data_size": 65536 00:10:03.381 }, 00:10:03.381 { 00:10:03.381 "name": "BaseBdev3", 00:10:03.381 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:03.381 "is_configured": true, 00:10:03.381 "data_offset": 0, 00:10:03.381 "data_size": 65536 00:10:03.381 } 00:10:03.381 ] 00:10:03.381 }' 00:10:03.381 19:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.381 19:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 [2024-11-27 19:08:13.231313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.640 
19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.640 "name": "Existed_Raid", 00:10:03.640 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:03.640 "strip_size_kb": 64, 00:10:03.640 "state": "configuring", 00:10:03.640 "raid_level": "concat", 00:10:03.640 "superblock": false, 00:10:03.640 "num_base_bdevs": 3, 00:10:03.640 "num_base_bdevs_discovered": 1, 00:10:03.640 "num_base_bdevs_operational": 3, 00:10:03.640 "base_bdevs_list": [ 00:10:03.640 { 00:10:03.640 "name": "BaseBdev1", 00:10:03.640 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:03.640 "is_configured": true, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 65536 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": null, 00:10:03.640 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:03.640 "is_configured": false, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 65536 00:10:03.640 }, 00:10:03.640 { 00:10:03.640 "name": null, 00:10:03.640 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:03.640 "is_configured": false, 00:10:03.640 "data_offset": 0, 00:10:03.640 "data_size": 65536 00:10:03.640 } 00:10:03.640 ] 00:10:03.640 }' 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.640 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
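The `verify_raid_bdev_state` helper traced above repeatedly pulls `rpc_cmd bdev_raid_get_bdevs all` and filters the result with `jq -r '.[] | select(.name == "Existed_Raid")'` before asserting on the state fields. The same check can be sketched in Python against a captured RPC response — the record below is abridged from the state dump in this log (after `bdev_raid_remove_base_bdev BaseBdev3`), and the helper name and exact field choices are our assumptions, not SPDK API:

```python
import json

def verify_raid_bdev_state(rpc_output, name, expected_state,
                           raid_level, strip_size_kb, num_operational):
    """Mimic bdev_raid.sh's verify_raid_bdev_state: select the raid bdev by
    name and compare the fields the shell helper asserts on."""
    info = next(b for b in json.loads(rpc_output) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # the discovered count must agree with the slots that are configured
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert info["num_base_bdevs_discovered"] == configured
    return info

# Abridged record from the dump above: one configured slot after removal
sample = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": None, "is_configured": False},
        {"name": None, "is_configured": False},
    ],
}])
info = verify_raid_bdev_state(sample, "Existed_Raid", "configuring",
                              "concat", 64, 3)
```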
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.207 [2024-11-27 19:08:13.654674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.207 "name": "Existed_Raid", 00:10:04.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.207 "strip_size_kb": 64, 00:10:04.207 "state": "configuring", 00:10:04.207 "raid_level": "concat", 00:10:04.207 "superblock": false, 00:10:04.207 "num_base_bdevs": 3, 00:10:04.207 "num_base_bdevs_discovered": 2, 00:10:04.207 "num_base_bdevs_operational": 3, 00:10:04.207 "base_bdevs_list": [ 00:10:04.207 { 00:10:04.207 "name": "BaseBdev1", 00:10:04.207 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:04.207 "is_configured": true, 00:10:04.207 "data_offset": 0, 00:10:04.207 "data_size": 65536 00:10:04.207 }, 00:10:04.207 { 00:10:04.207 "name": null, 00:10:04.207 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:04.207 "is_configured": false, 00:10:04.207 "data_offset": 0, 00:10:04.207 "data_size": 65536 00:10:04.207 }, 00:10:04.207 { 00:10:04.207 "name": "BaseBdev3", 00:10:04.207 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:04.207 "is_configured": true, 00:10:04.207 "data_offset": 0, 00:10:04.207 "data_size": 65536 00:10:04.207 } 00:10:04.207 ] 00:10:04.207 }' 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.207 19:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.776 [2024-11-27 19:08:14.153869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.776 
19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.776 "name": "Existed_Raid", 00:10:04.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.776 "strip_size_kb": 64, 00:10:04.776 "state": "configuring", 00:10:04.776 "raid_level": "concat", 00:10:04.776 "superblock": false, 00:10:04.776 "num_base_bdevs": 3, 00:10:04.776 "num_base_bdevs_discovered": 1, 00:10:04.776 "num_base_bdevs_operational": 3, 00:10:04.776 "base_bdevs_list": [ 00:10:04.776 { 00:10:04.776 "name": null, 00:10:04.776 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:04.776 "is_configured": false, 00:10:04.776 "data_offset": 0, 00:10:04.776 "data_size": 65536 00:10:04.776 }, 00:10:04.776 { 00:10:04.776 "name": null, 00:10:04.776 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:04.776 "is_configured": false, 00:10:04.776 "data_offset": 0, 00:10:04.776 "data_size": 65536 00:10:04.776 }, 00:10:04.776 { 00:10:04.776 "name": "BaseBdev3", 00:10:04.776 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:04.776 "is_configured": true, 00:10:04.776 "data_offset": 0, 00:10:04.776 "data_size": 65536 00:10:04.776 } 00:10:04.776 ] 00:10:04.776 }' 00:10:04.776 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.776 19:08:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 [2024-11-27 19:08:14.719140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.346 19:08:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.346 "name": "Existed_Raid", 00:10:05.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.346 "strip_size_kb": 64, 00:10:05.346 "state": "configuring", 00:10:05.346 "raid_level": "concat", 00:10:05.346 "superblock": false, 00:10:05.346 "num_base_bdevs": 3, 00:10:05.346 "num_base_bdevs_discovered": 2, 00:10:05.346 "num_base_bdevs_operational": 3, 00:10:05.346 "base_bdevs_list": [ 00:10:05.346 { 00:10:05.346 "name": null, 00:10:05.346 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:05.346 "is_configured": false, 00:10:05.346 "data_offset": 0, 00:10:05.346 "data_size": 65536 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "name": "BaseBdev2", 00:10:05.346 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:05.346 "is_configured": true, 00:10:05.346 "data_offset": 
0, 00:10:05.346 "data_size": 65536 00:10:05.346 }, 00:10:05.346 { 00:10:05.346 "name": "BaseBdev3", 00:10:05.346 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:05.346 "is_configured": true, 00:10:05.346 "data_offset": 0, 00:10:05.346 "data_size": 65536 00:10:05.346 } 00:10:05.346 ] 00:10:05.346 }' 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.346 19:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:05.606 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3e229b8b-8b0a-441b-896f-dc3bae2a8379 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- 
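Before each `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` call, the test probes a single slot's `is_configured` flag with jq (e.g. `jq '.[0].base_bdevs_list[1].is_configured'`) to confirm the precondition. A Python equivalent of that slot check, run against a record abridged from the dump above (the helper name is ours):

```python
import json

def base_bdev_slot_configured(rpc_output, slot):
    """Equivalent of jq '.[0].base_bdevs_list[SLOT].is_configured'
    applied to the output of 'bdev_raid_get_bdevs all'."""
    raid = json.loads(rpc_output)[0]
    return raid["base_bdevs_list"][slot]["is_configured"]

# Abridged from the state above: slot 0 vacated by the BaseBdev1 delete,
# slots 1 and 2 re-populated by BaseBdev2/BaseBdev3.
sample = json.dumps([{
    "name": "Existed_Raid",
    "base_bdevs_list": [
        {"name": None, "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ],
}])
print(base_bdev_slot_configured(sample, 0))  # False: slot 0 still vacant
print(base_bdev_slot_configured(sample, 1))  # True
```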
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.866 [2024-11-27 19:08:15.305959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:05.866 [2024-11-27 19:08:15.306128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.866 [2024-11-27 19:08:15.306158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:05.866 [2024-11-27 19:08:15.306489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:05.866 [2024-11-27 19:08:15.306732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.866 [2024-11-27 19:08:15.306778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:05.866 [2024-11-27 19:08:15.307142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.866 NewBaseBdev 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.866 
19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.866 [ 00:10:05.866 { 00:10:05.866 "name": "NewBaseBdev", 00:10:05.866 "aliases": [ 00:10:05.866 "3e229b8b-8b0a-441b-896f-dc3bae2a8379" 00:10:05.866 ], 00:10:05.866 "product_name": "Malloc disk", 00:10:05.866 "block_size": 512, 00:10:05.866 "num_blocks": 65536, 00:10:05.866 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:05.866 "assigned_rate_limits": { 00:10:05.866 "rw_ios_per_sec": 0, 00:10:05.866 "rw_mbytes_per_sec": 0, 00:10:05.866 "r_mbytes_per_sec": 0, 00:10:05.866 "w_mbytes_per_sec": 0 00:10:05.866 }, 00:10:05.866 "claimed": true, 00:10:05.866 "claim_type": "exclusive_write", 00:10:05.866 "zoned": false, 00:10:05.866 "supported_io_types": { 00:10:05.866 "read": true, 00:10:05.866 "write": true, 00:10:05.866 "unmap": true, 00:10:05.866 "flush": true, 00:10:05.866 "reset": true, 00:10:05.866 "nvme_admin": false, 00:10:05.866 "nvme_io": false, 00:10:05.866 "nvme_io_md": false, 00:10:05.866 "write_zeroes": true, 00:10:05.866 "zcopy": true, 00:10:05.866 "get_zone_info": false, 00:10:05.866 "zone_management": false, 00:10:05.866 "zone_append": false, 00:10:05.866 "compare": false, 00:10:05.866 "compare_and_write": false, 00:10:05.866 "abort": true, 00:10:05.866 "seek_hole": false, 00:10:05.866 "seek_data": false, 00:10:05.866 "copy": true, 00:10:05.866 "nvme_iov_md": false 00:10:05.866 }, 00:10:05.866 
"memory_domains": [ 00:10:05.866 { 00:10:05.866 "dma_device_id": "system", 00:10:05.866 "dma_device_type": 1 00:10:05.866 }, 00:10:05.866 { 00:10:05.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.866 "dma_device_type": 2 00:10:05.866 } 00:10:05.866 ], 00:10:05.866 "driver_specific": {} 00:10:05.866 } 00:10:05.866 ] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.866 "name": "Existed_Raid", 00:10:05.866 "uuid": "734397c2-0291-41c8-9d2f-3225a2444d83", 00:10:05.866 "strip_size_kb": 64, 00:10:05.866 "state": "online", 00:10:05.866 "raid_level": "concat", 00:10:05.866 "superblock": false, 00:10:05.866 "num_base_bdevs": 3, 00:10:05.866 "num_base_bdevs_discovered": 3, 00:10:05.866 "num_base_bdevs_operational": 3, 00:10:05.866 "base_bdevs_list": [ 00:10:05.866 { 00:10:05.866 "name": "NewBaseBdev", 00:10:05.866 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:05.866 "is_configured": true, 00:10:05.866 "data_offset": 0, 00:10:05.866 "data_size": 65536 00:10:05.866 }, 00:10:05.866 { 00:10:05.866 "name": "BaseBdev2", 00:10:05.866 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:05.866 "is_configured": true, 00:10:05.866 "data_offset": 0, 00:10:05.866 "data_size": 65536 00:10:05.866 }, 00:10:05.866 { 00:10:05.866 "name": "BaseBdev3", 00:10:05.866 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:05.866 "is_configured": true, 00:10:05.866 "data_offset": 0, 00:10:05.866 "data_size": 65536 00:10:05.866 } 00:10:05.866 ] 00:10:05.866 }' 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.866 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.126 [2024-11-27 19:08:15.717668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.126 "name": "Existed_Raid", 00:10:06.126 "aliases": [ 00:10:06.126 "734397c2-0291-41c8-9d2f-3225a2444d83" 00:10:06.126 ], 00:10:06.126 "product_name": "Raid Volume", 00:10:06.126 "block_size": 512, 00:10:06.126 "num_blocks": 196608, 00:10:06.126 "uuid": "734397c2-0291-41c8-9d2f-3225a2444d83", 00:10:06.126 "assigned_rate_limits": { 00:10:06.126 "rw_ios_per_sec": 0, 00:10:06.126 "rw_mbytes_per_sec": 0, 00:10:06.126 "r_mbytes_per_sec": 0, 00:10:06.126 "w_mbytes_per_sec": 0 00:10:06.126 }, 00:10:06.126 "claimed": false, 00:10:06.126 "zoned": false, 00:10:06.126 "supported_io_types": { 00:10:06.126 "read": true, 00:10:06.126 "write": true, 00:10:06.126 "unmap": true, 00:10:06.126 "flush": true, 00:10:06.126 "reset": true, 00:10:06.126 "nvme_admin": false, 00:10:06.126 "nvme_io": false, 00:10:06.126 "nvme_io_md": false, 00:10:06.126 "write_zeroes": true, 
00:10:06.126 "zcopy": false, 00:10:06.126 "get_zone_info": false, 00:10:06.126 "zone_management": false, 00:10:06.126 "zone_append": false, 00:10:06.126 "compare": false, 00:10:06.126 "compare_and_write": false, 00:10:06.126 "abort": false, 00:10:06.126 "seek_hole": false, 00:10:06.126 "seek_data": false, 00:10:06.126 "copy": false, 00:10:06.126 "nvme_iov_md": false 00:10:06.126 }, 00:10:06.126 "memory_domains": [ 00:10:06.126 { 00:10:06.126 "dma_device_id": "system", 00:10:06.126 "dma_device_type": 1 00:10:06.126 }, 00:10:06.126 { 00:10:06.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.126 "dma_device_type": 2 00:10:06.126 }, 00:10:06.126 { 00:10:06.126 "dma_device_id": "system", 00:10:06.126 "dma_device_type": 1 00:10:06.126 }, 00:10:06.126 { 00:10:06.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.126 "dma_device_type": 2 00:10:06.126 }, 00:10:06.126 { 00:10:06.126 "dma_device_id": "system", 00:10:06.126 "dma_device_type": 1 00:10:06.126 }, 00:10:06.126 { 00:10:06.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.126 "dma_device_type": 2 00:10:06.126 } 00:10:06.126 ], 00:10:06.126 "driver_specific": { 00:10:06.126 "raid": { 00:10:06.126 "uuid": "734397c2-0291-41c8-9d2f-3225a2444d83", 00:10:06.126 "strip_size_kb": 64, 00:10:06.126 "state": "online", 00:10:06.126 "raid_level": "concat", 00:10:06.126 "superblock": false, 00:10:06.126 "num_base_bdevs": 3, 00:10:06.126 "num_base_bdevs_discovered": 3, 00:10:06.126 "num_base_bdevs_operational": 3, 00:10:06.126 "base_bdevs_list": [ 00:10:06.126 { 00:10:06.126 "name": "NewBaseBdev", 00:10:06.126 "uuid": "3e229b8b-8b0a-441b-896f-dc3bae2a8379", 00:10:06.126 "is_configured": true, 00:10:06.126 "data_offset": 0, 00:10:06.126 "data_size": 65536 00:10:06.126 }, 00:10:06.126 { 00:10:06.126 "name": "BaseBdev2", 00:10:06.126 "uuid": "084bc854-2b1a-47ac-ac37-355587508d05", 00:10:06.126 "is_configured": true, 00:10:06.126 "data_offset": 0, 00:10:06.126 "data_size": 65536 00:10:06.126 }, 00:10:06.126 { 
00:10:06.126 "name": "BaseBdev3", 00:10:06.126 "uuid": "508507f4-6522-4bd8-b427-4f0008318c09", 00:10:06.126 "is_configured": true, 00:10:06.126 "data_offset": 0, 00:10:06.126 "data_size": 65536 00:10:06.126 } 00:10:06.126 ] 00:10:06.126 } 00:10:06.126 } 00:10:06.126 }' 00:10:06.126 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.387 BaseBdev2 00:10:06.387 BaseBdev3' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:06.387 [2024-11-27 19:08:15.992803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.387 [2024-11-27 19:08:15.992874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.387 [2024-11-27 19:08:15.992993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.387 [2024-11-27 19:08:15.993071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.387 [2024-11-27 19:08:15.993163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65705 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65705 ']' 00:10:06.387 19:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65705 00:10:06.387 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:06.387 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.387 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65705 00:10:06.647 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.647 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.647 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65705' 00:10:06.647 killing process with pid 65705 00:10:06.647 19:08:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65705 00:10:06.647 [2024-11-27 19:08:16.028906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.647 19:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65705 00:10:06.906 [2024-11-27 19:08:16.354236] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.287 00:10:08.287 real 0m10.685s 00:10:08.287 user 0m16.624s 00:10:08.287 sys 0m2.044s 00:10:08.287 ************************************ 00:10:08.287 END TEST raid_state_function_test 00:10:08.287 ************************************ 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 19:08:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:08.287 19:08:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:08.287 19:08:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.287 19:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 ************************************ 00:10:08.287 START TEST raid_state_function_test_sb 00:10:08.287 ************************************ 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66326 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66326' 00:10:08.287 Process raid pid: 66326 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66326 00:10:08.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66326 ']' 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.287 19:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.287 [2024-11-27 19:08:17.740168] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:08.287 [2024-11-27 19:08:17.740364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.287 [2024-11-27 19:08:17.915574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.547 [2024-11-27 19:08:18.057576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.806 [2024-11-27 19:08:18.296750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.806 [2024-11-27 19:08:18.296807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.066 [2024-11-27 19:08:18.580884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.066 [2024-11-27 19:08:18.581009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.066 [2024-11-27 
19:08:18.581052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.066 [2024-11-27 19:08:18.581079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.066 [2024-11-27 19:08:18.581128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.066 [2024-11-27 19:08:18.581151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.066 "name": "Existed_Raid", 00:10:09.066 "uuid": "896af272-f70e-4d0b-8890-abb963b40ed4", 00:10:09.066 "strip_size_kb": 64, 00:10:09.066 "state": "configuring", 00:10:09.066 "raid_level": "concat", 00:10:09.066 "superblock": true, 00:10:09.066 "num_base_bdevs": 3, 00:10:09.066 "num_base_bdevs_discovered": 0, 00:10:09.066 "num_base_bdevs_operational": 3, 00:10:09.066 "base_bdevs_list": [ 00:10:09.066 { 00:10:09.066 "name": "BaseBdev1", 00:10:09.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.066 "is_configured": false, 00:10:09.066 "data_offset": 0, 00:10:09.066 "data_size": 0 00:10:09.066 }, 00:10:09.066 { 00:10:09.066 "name": "BaseBdev2", 00:10:09.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.066 "is_configured": false, 00:10:09.066 "data_offset": 0, 00:10:09.066 "data_size": 0 00:10:09.066 }, 00:10:09.066 { 00:10:09.066 "name": "BaseBdev3", 00:10:09.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.066 "is_configured": false, 00:10:09.066 "data_offset": 0, 00:10:09.066 "data_size": 0 00:10:09.066 } 00:10:09.066 ] 00:10:09.066 }' 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.066 19:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.636 [2024-11-27 19:08:19.067952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.636 [2024-11-27 19:08:19.068043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.636 [2024-11-27 19:08:19.079913] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.636 [2024-11-27 19:08:19.079965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.636 [2024-11-27 19:08:19.079976] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.636 [2024-11-27 19:08:19.079986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.636 [2024-11-27 19:08:19.079992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.636 [2024-11-27 19:08:19.080002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.636 
19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.636 [2024-11-27 19:08:19.134996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.636 BaseBdev1 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.636 [ 00:10:09.636 { 
00:10:09.636 "name": "BaseBdev1", 00:10:09.636 "aliases": [ 00:10:09.636 "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77" 00:10:09.636 ], 00:10:09.636 "product_name": "Malloc disk", 00:10:09.636 "block_size": 512, 00:10:09.636 "num_blocks": 65536, 00:10:09.636 "uuid": "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77", 00:10:09.636 "assigned_rate_limits": { 00:10:09.636 "rw_ios_per_sec": 0, 00:10:09.636 "rw_mbytes_per_sec": 0, 00:10:09.636 "r_mbytes_per_sec": 0, 00:10:09.636 "w_mbytes_per_sec": 0 00:10:09.636 }, 00:10:09.636 "claimed": true, 00:10:09.636 "claim_type": "exclusive_write", 00:10:09.636 "zoned": false, 00:10:09.636 "supported_io_types": { 00:10:09.636 "read": true, 00:10:09.636 "write": true, 00:10:09.636 "unmap": true, 00:10:09.636 "flush": true, 00:10:09.636 "reset": true, 00:10:09.636 "nvme_admin": false, 00:10:09.636 "nvme_io": false, 00:10:09.636 "nvme_io_md": false, 00:10:09.636 "write_zeroes": true, 00:10:09.636 "zcopy": true, 00:10:09.636 "get_zone_info": false, 00:10:09.636 "zone_management": false, 00:10:09.636 "zone_append": false, 00:10:09.636 "compare": false, 00:10:09.636 "compare_and_write": false, 00:10:09.636 "abort": true, 00:10:09.636 "seek_hole": false, 00:10:09.636 "seek_data": false, 00:10:09.636 "copy": true, 00:10:09.636 "nvme_iov_md": false 00:10:09.636 }, 00:10:09.636 "memory_domains": [ 00:10:09.636 { 00:10:09.636 "dma_device_id": "system", 00:10:09.636 "dma_device_type": 1 00:10:09.636 }, 00:10:09.636 { 00:10:09.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.636 "dma_device_type": 2 00:10:09.636 } 00:10:09.636 ], 00:10:09.636 "driver_specific": {} 00:10:09.636 } 00:10:09.636 ] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.636 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.637 "name": "Existed_Raid", 00:10:09.637 "uuid": "ea508dbd-8880-4aee-aaac-bd38286299b8", 00:10:09.637 "strip_size_kb": 64, 00:10:09.637 "state": "configuring", 00:10:09.637 "raid_level": "concat", 00:10:09.637 "superblock": true, 00:10:09.637 
"num_base_bdevs": 3, 00:10:09.637 "num_base_bdevs_discovered": 1, 00:10:09.637 "num_base_bdevs_operational": 3, 00:10:09.637 "base_bdevs_list": [ 00:10:09.637 { 00:10:09.637 "name": "BaseBdev1", 00:10:09.637 "uuid": "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77", 00:10:09.637 "is_configured": true, 00:10:09.637 "data_offset": 2048, 00:10:09.637 "data_size": 63488 00:10:09.637 }, 00:10:09.637 { 00:10:09.637 "name": "BaseBdev2", 00:10:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.637 "is_configured": false, 00:10:09.637 "data_offset": 0, 00:10:09.637 "data_size": 0 00:10:09.637 }, 00:10:09.637 { 00:10:09.637 "name": "BaseBdev3", 00:10:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.637 "is_configured": false, 00:10:09.637 "data_offset": 0, 00:10:09.637 "data_size": 0 00:10:09.637 } 00:10:09.637 ] 00:10:09.637 }' 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.637 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.260 [2024-11-27 19:08:19.642157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.260 [2024-11-27 19:08:19.642266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.260 
19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.260 [2024-11-27 19:08:19.654182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.260 [2024-11-27 19:08:19.656451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.260 [2024-11-27 19:08:19.656534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.260 [2024-11-27 19:08:19.656563] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.260 [2024-11-27 19:08:19.656586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.260 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.260 "name": "Existed_Raid", 00:10:10.260 "uuid": "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9", 00:10:10.260 "strip_size_kb": 64, 00:10:10.260 "state": "configuring", 00:10:10.261 "raid_level": "concat", 00:10:10.261 "superblock": true, 00:10:10.261 "num_base_bdevs": 3, 00:10:10.261 "num_base_bdevs_discovered": 1, 00:10:10.261 "num_base_bdevs_operational": 3, 00:10:10.261 "base_bdevs_list": [ 00:10:10.261 { 00:10:10.261 "name": "BaseBdev1", 00:10:10.261 "uuid": "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77", 00:10:10.261 "is_configured": true, 00:10:10.261 "data_offset": 2048, 00:10:10.261 "data_size": 63488 00:10:10.261 }, 00:10:10.261 { 00:10:10.261 "name": "BaseBdev2", 00:10:10.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.261 "is_configured": false, 00:10:10.261 "data_offset": 0, 00:10:10.261 "data_size": 0 00:10:10.261 }, 00:10:10.261 { 00:10:10.261 "name": "BaseBdev3", 00:10:10.261 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:10.261 "is_configured": false, 00:10:10.261 "data_offset": 0, 00:10:10.261 "data_size": 0 00:10:10.261 } 00:10:10.261 ] 00:10:10.261 }' 00:10:10.261 19:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.261 19:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.566 [2024-11-27 19:08:20.081769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.566 BaseBdev2 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.566 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.566 [ 00:10:10.566 { 00:10:10.566 "name": "BaseBdev2", 00:10:10.566 "aliases": [ 00:10:10.566 "3a119c18-6512-4a9d-820b-80cda938cf81" 00:10:10.566 ], 00:10:10.566 "product_name": "Malloc disk", 00:10:10.566 "block_size": 512, 00:10:10.566 "num_blocks": 65536, 00:10:10.566 "uuid": "3a119c18-6512-4a9d-820b-80cda938cf81", 00:10:10.566 "assigned_rate_limits": { 00:10:10.566 "rw_ios_per_sec": 0, 00:10:10.566 "rw_mbytes_per_sec": 0, 00:10:10.566 "r_mbytes_per_sec": 0, 00:10:10.566 "w_mbytes_per_sec": 0 00:10:10.566 }, 00:10:10.566 "claimed": true, 00:10:10.566 "claim_type": "exclusive_write", 00:10:10.566 "zoned": false, 00:10:10.566 "supported_io_types": { 00:10:10.566 "read": true, 00:10:10.566 "write": true, 00:10:10.566 "unmap": true, 00:10:10.566 "flush": true, 00:10:10.566 "reset": true, 00:10:10.566 "nvme_admin": false, 00:10:10.566 "nvme_io": false, 00:10:10.566 "nvme_io_md": false, 00:10:10.566 "write_zeroes": true, 00:10:10.566 "zcopy": true, 00:10:10.566 "get_zone_info": false, 00:10:10.566 "zone_management": false, 00:10:10.566 "zone_append": false, 00:10:10.566 "compare": false, 00:10:10.566 "compare_and_write": false, 00:10:10.566 "abort": true, 00:10:10.566 "seek_hole": false, 00:10:10.566 "seek_data": false, 00:10:10.566 "copy": true, 00:10:10.566 "nvme_iov_md": false 00:10:10.566 }, 00:10:10.566 "memory_domains": [ 00:10:10.566 { 00:10:10.566 "dma_device_id": "system", 00:10:10.566 "dma_device_type": 1 00:10:10.566 }, 00:10:10.567 { 00:10:10.567 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.567 "dma_device_type": 2 00:10:10.567 } 00:10:10.567 ], 00:10:10.567 "driver_specific": {} 00:10:10.567 } 00:10:10.567 ] 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.567 "name": "Existed_Raid", 00:10:10.567 "uuid": "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9", 00:10:10.567 "strip_size_kb": 64, 00:10:10.567 "state": "configuring", 00:10:10.567 "raid_level": "concat", 00:10:10.567 "superblock": true, 00:10:10.567 "num_base_bdevs": 3, 00:10:10.567 "num_base_bdevs_discovered": 2, 00:10:10.567 "num_base_bdevs_operational": 3, 00:10:10.567 "base_bdevs_list": [ 00:10:10.567 { 00:10:10.567 "name": "BaseBdev1", 00:10:10.567 "uuid": "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77", 00:10:10.567 "is_configured": true, 00:10:10.567 "data_offset": 2048, 00:10:10.567 "data_size": 63488 00:10:10.567 }, 00:10:10.567 { 00:10:10.567 "name": "BaseBdev2", 00:10:10.567 "uuid": "3a119c18-6512-4a9d-820b-80cda938cf81", 00:10:10.567 "is_configured": true, 00:10:10.567 "data_offset": 2048, 00:10:10.567 "data_size": 63488 00:10:10.567 }, 00:10:10.567 { 00:10:10.567 "name": "BaseBdev3", 00:10:10.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.567 "is_configured": false, 00:10:10.567 "data_offset": 0, 00:10:10.567 "data_size": 0 00:10:10.567 } 00:10:10.567 ] 00:10:10.567 }' 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.567 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.135 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.135 19:08:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.135 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.135 [2024-11-27 19:08:20.607382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.135 [2024-11-27 19:08:20.607836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.135 [2024-11-27 19:08:20.607904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:11.135 [2024-11-27 19:08:20.608242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:11.136 [2024-11-27 19:08:20.608476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.136 [2024-11-27 19:08:20.608521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:11.136 BaseBdev3 00:10:11.136 [2024-11-27 19:08:20.608748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.136 [ 00:10:11.136 { 00:10:11.136 "name": "BaseBdev3", 00:10:11.136 "aliases": [ 00:10:11.136 "2e69de3e-9c21-4fde-aa03-0eeff72ed694" 00:10:11.136 ], 00:10:11.136 "product_name": "Malloc disk", 00:10:11.136 "block_size": 512, 00:10:11.136 "num_blocks": 65536, 00:10:11.136 "uuid": "2e69de3e-9c21-4fde-aa03-0eeff72ed694", 00:10:11.136 "assigned_rate_limits": { 00:10:11.136 "rw_ios_per_sec": 0, 00:10:11.136 "rw_mbytes_per_sec": 0, 00:10:11.136 "r_mbytes_per_sec": 0, 00:10:11.136 "w_mbytes_per_sec": 0 00:10:11.136 }, 00:10:11.136 "claimed": true, 00:10:11.136 "claim_type": "exclusive_write", 00:10:11.136 "zoned": false, 00:10:11.136 "supported_io_types": { 00:10:11.136 "read": true, 00:10:11.136 "write": true, 00:10:11.136 "unmap": true, 00:10:11.136 "flush": true, 00:10:11.136 "reset": true, 00:10:11.136 "nvme_admin": false, 00:10:11.136 "nvme_io": false, 00:10:11.136 "nvme_io_md": false, 00:10:11.136 "write_zeroes": true, 00:10:11.136 "zcopy": true, 00:10:11.136 "get_zone_info": false, 00:10:11.136 "zone_management": false, 00:10:11.136 "zone_append": false, 00:10:11.136 "compare": false, 00:10:11.136 "compare_and_write": false, 00:10:11.136 "abort": true, 00:10:11.136 "seek_hole": false, 00:10:11.136 "seek_data": false, 
00:10:11.136 "copy": true, 00:10:11.136 "nvme_iov_md": false 00:10:11.136 }, 00:10:11.136 "memory_domains": [ 00:10:11.136 { 00:10:11.136 "dma_device_id": "system", 00:10:11.136 "dma_device_type": 1 00:10:11.136 }, 00:10:11.136 { 00:10:11.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.136 "dma_device_type": 2 00:10:11.136 } 00:10:11.136 ], 00:10:11.136 "driver_specific": {} 00:10:11.136 } 00:10:11.136 ] 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.136 "name": "Existed_Raid", 00:10:11.136 "uuid": "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9", 00:10:11.136 "strip_size_kb": 64, 00:10:11.136 "state": "online", 00:10:11.136 "raid_level": "concat", 00:10:11.136 "superblock": true, 00:10:11.136 "num_base_bdevs": 3, 00:10:11.136 "num_base_bdevs_discovered": 3, 00:10:11.136 "num_base_bdevs_operational": 3, 00:10:11.136 "base_bdevs_list": [ 00:10:11.136 { 00:10:11.136 "name": "BaseBdev1", 00:10:11.136 "uuid": "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77", 00:10:11.136 "is_configured": true, 00:10:11.136 "data_offset": 2048, 00:10:11.136 "data_size": 63488 00:10:11.136 }, 00:10:11.136 { 00:10:11.136 "name": "BaseBdev2", 00:10:11.136 "uuid": "3a119c18-6512-4a9d-820b-80cda938cf81", 00:10:11.136 "is_configured": true, 00:10:11.136 "data_offset": 2048, 00:10:11.136 "data_size": 63488 00:10:11.136 }, 00:10:11.136 { 00:10:11.136 "name": "BaseBdev3", 00:10:11.136 "uuid": "2e69de3e-9c21-4fde-aa03-0eeff72ed694", 00:10:11.136 "is_configured": true, 00:10:11.136 "data_offset": 2048, 00:10:11.136 "data_size": 63488 00:10:11.136 } 00:10:11.136 ] 00:10:11.136 }' 00:10:11.136 19:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.136 19:08:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.707 [2024-11-27 19:08:21.086996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.707 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.707 "name": "Existed_Raid", 00:10:11.707 "aliases": [ 00:10:11.707 "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9" 00:10:11.707 ], 00:10:11.707 "product_name": "Raid Volume", 00:10:11.707 "block_size": 512, 00:10:11.707 "num_blocks": 190464, 00:10:11.707 "uuid": "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9", 00:10:11.707 "assigned_rate_limits": { 00:10:11.707 "rw_ios_per_sec": 0, 00:10:11.707 "rw_mbytes_per_sec": 0, 00:10:11.707 
"r_mbytes_per_sec": 0, 00:10:11.707 "w_mbytes_per_sec": 0 00:10:11.707 }, 00:10:11.707 "claimed": false, 00:10:11.707 "zoned": false, 00:10:11.707 "supported_io_types": { 00:10:11.707 "read": true, 00:10:11.707 "write": true, 00:10:11.707 "unmap": true, 00:10:11.707 "flush": true, 00:10:11.707 "reset": true, 00:10:11.707 "nvme_admin": false, 00:10:11.707 "nvme_io": false, 00:10:11.707 "nvme_io_md": false, 00:10:11.707 "write_zeroes": true, 00:10:11.707 "zcopy": false, 00:10:11.707 "get_zone_info": false, 00:10:11.707 "zone_management": false, 00:10:11.707 "zone_append": false, 00:10:11.707 "compare": false, 00:10:11.707 "compare_and_write": false, 00:10:11.707 "abort": false, 00:10:11.707 "seek_hole": false, 00:10:11.707 "seek_data": false, 00:10:11.707 "copy": false, 00:10:11.707 "nvme_iov_md": false 00:10:11.707 }, 00:10:11.707 "memory_domains": [ 00:10:11.707 { 00:10:11.707 "dma_device_id": "system", 00:10:11.707 "dma_device_type": 1 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.707 "dma_device_type": 2 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "dma_device_id": "system", 00:10:11.707 "dma_device_type": 1 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.707 "dma_device_type": 2 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "dma_device_id": "system", 00:10:11.707 "dma_device_type": 1 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.707 "dma_device_type": 2 00:10:11.707 } 00:10:11.707 ], 00:10:11.707 "driver_specific": { 00:10:11.707 "raid": { 00:10:11.707 "uuid": "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9", 00:10:11.707 "strip_size_kb": 64, 00:10:11.707 "state": "online", 00:10:11.707 "raid_level": "concat", 00:10:11.707 "superblock": true, 00:10:11.707 "num_base_bdevs": 3, 00:10:11.707 "num_base_bdevs_discovered": 3, 00:10:11.707 "num_base_bdevs_operational": 3, 00:10:11.707 "base_bdevs_list": [ 00:10:11.707 { 00:10:11.707 
"name": "BaseBdev1", 00:10:11.707 "uuid": "ea62c4ce-ba96-4d5b-a172-93a6c40d6a77", 00:10:11.707 "is_configured": true, 00:10:11.707 "data_offset": 2048, 00:10:11.707 "data_size": 63488 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "name": "BaseBdev2", 00:10:11.707 "uuid": "3a119c18-6512-4a9d-820b-80cda938cf81", 00:10:11.707 "is_configured": true, 00:10:11.707 "data_offset": 2048, 00:10:11.707 "data_size": 63488 00:10:11.707 }, 00:10:11.707 { 00:10:11.707 "name": "BaseBdev3", 00:10:11.707 "uuid": "2e69de3e-9c21-4fde-aa03-0eeff72ed694", 00:10:11.707 "is_configured": true, 00:10:11.707 "data_offset": 2048, 00:10:11.707 "data_size": 63488 00:10:11.707 } 00:10:11.707 ] 00:10:11.707 } 00:10:11.707 } 00:10:11.707 }' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.708 BaseBdev2 00:10:11.708 BaseBdev3' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.708 19:08:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.708 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.967 [2024-11-27 19:08:21.346231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.967 [2024-11-27 19:08:21.346308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.967 [2024-11-27 19:08:21.346398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.967 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.968 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.968 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.968 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.968 "name": "Existed_Raid", 00:10:11.968 "uuid": "31f12e19-4a6b-4649-9e34-ed30e0ebd0d9", 00:10:11.968 "strip_size_kb": 64, 00:10:11.968 "state": "offline", 00:10:11.968 "raid_level": "concat", 00:10:11.968 "superblock": true, 00:10:11.968 "num_base_bdevs": 3, 00:10:11.968 "num_base_bdevs_discovered": 2, 00:10:11.968 "num_base_bdevs_operational": 2, 00:10:11.968 "base_bdevs_list": [ 00:10:11.968 { 00:10:11.968 "name": null, 00:10:11.968 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:11.968 "is_configured": false, 00:10:11.968 "data_offset": 0, 00:10:11.968 "data_size": 63488 00:10:11.968 }, 00:10:11.968 { 00:10:11.968 "name": "BaseBdev2", 00:10:11.968 "uuid": "3a119c18-6512-4a9d-820b-80cda938cf81", 00:10:11.968 "is_configured": true, 00:10:11.968 "data_offset": 2048, 00:10:11.968 "data_size": 63488 00:10:11.968 }, 00:10:11.968 { 00:10:11.968 "name": "BaseBdev3", 00:10:11.968 "uuid": "2e69de3e-9c21-4fde-aa03-0eeff72ed694", 00:10:11.968 "is_configured": true, 00:10:11.968 "data_offset": 2048, 00:10:11.968 "data_size": 63488 00:10:11.968 } 00:10:11.968 ] 00:10:11.968 }' 00:10:11.968 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.968 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.226 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.487 [2024-11-27 19:08:21.890379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.487 19:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.487 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.487 [2024-11-27 19:08:22.054968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.487 [2024-11-27 19:08:22.055085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 BaseBdev2 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 
19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 [ 00:10:12.747 { 00:10:12.747 "name": "BaseBdev2", 00:10:12.747 "aliases": [ 00:10:12.747 "14ddeb1c-524e-4a83-8025-a7c372ecb74e" 00:10:12.747 ], 00:10:12.747 "product_name": "Malloc disk", 00:10:12.747 "block_size": 512, 00:10:12.747 "num_blocks": 65536, 00:10:12.747 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:12.747 "assigned_rate_limits": { 00:10:12.747 "rw_ios_per_sec": 0, 00:10:12.747 "rw_mbytes_per_sec": 0, 00:10:12.747 "r_mbytes_per_sec": 0, 00:10:12.747 "w_mbytes_per_sec": 0 
00:10:12.747 }, 00:10:12.747 "claimed": false, 00:10:12.747 "zoned": false, 00:10:12.747 "supported_io_types": { 00:10:12.747 "read": true, 00:10:12.747 "write": true, 00:10:12.747 "unmap": true, 00:10:12.747 "flush": true, 00:10:12.747 "reset": true, 00:10:12.747 "nvme_admin": false, 00:10:12.747 "nvme_io": false, 00:10:12.747 "nvme_io_md": false, 00:10:12.747 "write_zeroes": true, 00:10:12.747 "zcopy": true, 00:10:12.747 "get_zone_info": false, 00:10:12.747 "zone_management": false, 00:10:12.747 "zone_append": false, 00:10:12.747 "compare": false, 00:10:12.747 "compare_and_write": false, 00:10:12.747 "abort": true, 00:10:12.747 "seek_hole": false, 00:10:12.747 "seek_data": false, 00:10:12.747 "copy": true, 00:10:12.747 "nvme_iov_md": false 00:10:12.747 }, 00:10:12.747 "memory_domains": [ 00:10:12.747 { 00:10:12.747 "dma_device_id": "system", 00:10:12.747 "dma_device_type": 1 00:10:12.747 }, 00:10:12.747 { 00:10:12.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.747 "dma_device_type": 2 00:10:12.747 } 00:10:12.747 ], 00:10:12.747 "driver_specific": {} 00:10:12.747 } 00:10:12.747 ] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 BaseBdev3 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.747 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.747 [ 00:10:12.747 { 00:10:12.748 "name": "BaseBdev3", 00:10:12.748 "aliases": [ 00:10:12.748 "87c943ba-96af-449b-b160-7363748e8479" 00:10:12.748 ], 00:10:12.748 "product_name": "Malloc disk", 00:10:12.748 "block_size": 512, 00:10:12.748 "num_blocks": 65536, 00:10:12.748 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:12.748 "assigned_rate_limits": { 00:10:12.748 "rw_ios_per_sec": 0, 00:10:12.748 "rw_mbytes_per_sec": 0, 
00:10:12.748 "r_mbytes_per_sec": 0, 00:10:12.748 "w_mbytes_per_sec": 0 00:10:12.748 }, 00:10:12.748 "claimed": false, 00:10:12.748 "zoned": false, 00:10:12.748 "supported_io_types": { 00:10:12.748 "read": true, 00:10:12.748 "write": true, 00:10:12.748 "unmap": true, 00:10:12.748 "flush": true, 00:10:12.748 "reset": true, 00:10:12.748 "nvme_admin": false, 00:10:12.748 "nvme_io": false, 00:10:12.748 "nvme_io_md": false, 00:10:12.748 "write_zeroes": true, 00:10:12.748 "zcopy": true, 00:10:12.748 "get_zone_info": false, 00:10:12.748 "zone_management": false, 00:10:12.748 "zone_append": false, 00:10:12.748 "compare": false, 00:10:12.748 "compare_and_write": false, 00:10:12.748 "abort": true, 00:10:12.748 "seek_hole": false, 00:10:12.748 "seek_data": false, 00:10:12.748 "copy": true, 00:10:12.748 "nvme_iov_md": false 00:10:12.748 }, 00:10:12.748 "memory_domains": [ 00:10:12.748 { 00:10:12.748 "dma_device_id": "system", 00:10:12.748 "dma_device_type": 1 00:10:12.748 }, 00:10:12.748 { 00:10:12.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.748 "dma_device_type": 2 00:10:12.748 } 00:10:12.748 ], 00:10:12.748 "driver_specific": {} 00:10:12.748 } 00:10:12.748 ] 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.008 [2024-11-27 19:08:22.387934] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.008 [2024-11-27 19:08:22.388028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.008 [2024-11-27 19:08:22.388076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.008 [2024-11-27 19:08:22.390174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.008 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.009 19:08:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.009 "name": "Existed_Raid", 00:10:13.009 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:13.009 "strip_size_kb": 64, 00:10:13.009 "state": "configuring", 00:10:13.009 "raid_level": "concat", 00:10:13.009 "superblock": true, 00:10:13.009 "num_base_bdevs": 3, 00:10:13.009 "num_base_bdevs_discovered": 2, 00:10:13.009 "num_base_bdevs_operational": 3, 00:10:13.009 "base_bdevs_list": [ 00:10:13.009 { 00:10:13.009 "name": "BaseBdev1", 00:10:13.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.009 "is_configured": false, 00:10:13.009 "data_offset": 0, 00:10:13.009 "data_size": 0 00:10:13.009 }, 00:10:13.009 { 00:10:13.009 "name": "BaseBdev2", 00:10:13.009 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:13.009 "is_configured": true, 00:10:13.009 "data_offset": 2048, 00:10:13.009 "data_size": 63488 00:10:13.009 }, 00:10:13.009 { 00:10:13.009 "name": "BaseBdev3", 00:10:13.009 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:13.009 "is_configured": true, 00:10:13.009 "data_offset": 2048, 00:10:13.009 "data_size": 63488 00:10:13.009 } 00:10:13.009 ] 00:10:13.009 }' 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.009 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.269 [2024-11-27 19:08:22.867206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.269 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.528 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.528 "name": "Existed_Raid", 00:10:13.528 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:13.528 "strip_size_kb": 64, 00:10:13.528 "state": "configuring", 00:10:13.528 "raid_level": "concat", 00:10:13.529 "superblock": true, 00:10:13.529 "num_base_bdevs": 3, 00:10:13.529 "num_base_bdevs_discovered": 1, 00:10:13.529 "num_base_bdevs_operational": 3, 00:10:13.529 "base_bdevs_list": [ 00:10:13.529 { 00:10:13.529 "name": "BaseBdev1", 00:10:13.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.529 "is_configured": false, 00:10:13.529 "data_offset": 0, 00:10:13.529 "data_size": 0 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "name": null, 00:10:13.529 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:13.529 "is_configured": false, 00:10:13.529 "data_offset": 0, 00:10:13.529 "data_size": 63488 00:10:13.529 }, 00:10:13.529 { 00:10:13.529 "name": "BaseBdev3", 00:10:13.529 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:13.529 "is_configured": true, 00:10:13.529 "data_offset": 2048, 00:10:13.529 "data_size": 63488 00:10:13.529 } 00:10:13.529 ] 00:10:13.529 }' 00:10:13.529 19:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.529 19:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 19:08:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 [2024-11-27 19:08:23.357916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.788 BaseBdev1 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 
19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.788 [ 00:10:13.788 { 00:10:13.788 "name": "BaseBdev1", 00:10:13.788 "aliases": [ 00:10:13.788 "bea413a8-4bf1-4c60-9881-b2be972db5b9" 00:10:13.788 ], 00:10:13.788 "product_name": "Malloc disk", 00:10:13.788 "block_size": 512, 00:10:13.788 "num_blocks": 65536, 00:10:13.788 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:13.788 "assigned_rate_limits": { 00:10:13.788 "rw_ios_per_sec": 0, 00:10:13.788 "rw_mbytes_per_sec": 0, 00:10:13.788 "r_mbytes_per_sec": 0, 00:10:13.788 "w_mbytes_per_sec": 0 00:10:13.788 }, 00:10:13.788 "claimed": true, 00:10:13.788 "claim_type": "exclusive_write", 00:10:13.788 "zoned": false, 00:10:13.788 "supported_io_types": { 00:10:13.788 "read": true, 00:10:13.788 "write": true, 00:10:13.788 "unmap": true, 00:10:13.788 "flush": true, 00:10:13.788 "reset": true, 00:10:13.788 "nvme_admin": false, 00:10:13.788 "nvme_io": false, 00:10:13.788 "nvme_io_md": false, 00:10:13.788 "write_zeroes": true, 00:10:13.788 "zcopy": true, 00:10:13.788 "get_zone_info": false, 00:10:13.788 "zone_management": false, 00:10:13.788 "zone_append": false, 00:10:13.788 "compare": false, 00:10:13.788 "compare_and_write": false, 00:10:13.788 "abort": true, 00:10:13.788 "seek_hole": false, 00:10:13.788 "seek_data": false, 00:10:13.788 "copy": true, 00:10:13.788 "nvme_iov_md": false 00:10:13.788 }, 00:10:13.788 "memory_domains": [ 00:10:13.788 { 00:10:13.788 "dma_device_id": "system", 00:10:13.788 "dma_device_type": 1 00:10:13.788 }, 00:10:13.788 { 00:10:13.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:13.788 "dma_device_type": 2 00:10:13.788 } 00:10:13.788 ], 00:10:13.788 "driver_specific": {} 00:10:13.788 } 00:10:13.788 ] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.788 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:13.789 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.048 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.048 "name": "Existed_Raid", 00:10:14.048 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:14.048 "strip_size_kb": 64, 00:10:14.048 "state": "configuring", 00:10:14.048 "raid_level": "concat", 00:10:14.048 "superblock": true, 00:10:14.048 "num_base_bdevs": 3, 00:10:14.048 "num_base_bdevs_discovered": 2, 00:10:14.048 "num_base_bdevs_operational": 3, 00:10:14.048 "base_bdevs_list": [ 00:10:14.048 { 00:10:14.048 "name": "BaseBdev1", 00:10:14.048 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:14.048 "is_configured": true, 00:10:14.048 "data_offset": 2048, 00:10:14.048 "data_size": 63488 00:10:14.048 }, 00:10:14.048 { 00:10:14.048 "name": null, 00:10:14.048 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:14.048 "is_configured": false, 00:10:14.048 "data_offset": 0, 00:10:14.048 "data_size": 63488 00:10:14.048 }, 00:10:14.048 { 00:10:14.048 "name": "BaseBdev3", 00:10:14.048 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:14.048 "is_configured": true, 00:10:14.048 "data_offset": 2048, 00:10:14.048 "data_size": 63488 00:10:14.048 } 00:10:14.048 ] 00:10:14.048 }' 00:10:14.048 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.048 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.306 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.306 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.306 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.307 [2024-11-27 19:08:23.873127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.307 "name": "Existed_Raid", 00:10:14.307 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:14.307 "strip_size_kb": 64, 00:10:14.307 "state": "configuring", 00:10:14.307 "raid_level": "concat", 00:10:14.307 "superblock": true, 00:10:14.307 "num_base_bdevs": 3, 00:10:14.307 "num_base_bdevs_discovered": 1, 00:10:14.307 "num_base_bdevs_operational": 3, 00:10:14.307 "base_bdevs_list": [ 00:10:14.307 { 00:10:14.307 "name": "BaseBdev1", 00:10:14.307 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:14.307 "is_configured": true, 00:10:14.307 "data_offset": 2048, 00:10:14.307 "data_size": 63488 00:10:14.307 }, 00:10:14.307 { 00:10:14.307 "name": null, 00:10:14.307 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:14.307 "is_configured": false, 00:10:14.307 "data_offset": 0, 00:10:14.307 "data_size": 63488 00:10:14.307 }, 00:10:14.307 { 00:10:14.307 "name": null, 00:10:14.307 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:14.307 "is_configured": false, 00:10:14.307 "data_offset": 0, 00:10:14.307 "data_size": 63488 00:10:14.307 } 00:10:14.307 ] 00:10:14.307 }' 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.307 19:08:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.876 [2024-11-27 19:08:24.388326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.876 19:08:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.876 "name": "Existed_Raid", 00:10:14.876 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:14.876 "strip_size_kb": 64, 00:10:14.876 "state": "configuring", 00:10:14.876 "raid_level": "concat", 00:10:14.876 "superblock": true, 00:10:14.876 "num_base_bdevs": 3, 00:10:14.876 "num_base_bdevs_discovered": 2, 00:10:14.876 "num_base_bdevs_operational": 3, 00:10:14.876 "base_bdevs_list": [ 00:10:14.876 { 00:10:14.876 "name": "BaseBdev1", 00:10:14.876 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:14.876 "is_configured": true, 00:10:14.876 "data_offset": 2048, 00:10:14.876 "data_size": 63488 00:10:14.876 }, 00:10:14.876 { 00:10:14.876 "name": null, 00:10:14.876 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:14.876 "is_configured": 
false, 00:10:14.876 "data_offset": 0, 00:10:14.876 "data_size": 63488 00:10:14.876 }, 00:10:14.876 { 00:10:14.876 "name": "BaseBdev3", 00:10:14.876 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:14.876 "is_configured": true, 00:10:14.876 "data_offset": 2048, 00:10:14.876 "data_size": 63488 00:10:14.876 } 00:10:14.876 ] 00:10:14.876 }' 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.876 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.135 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.135 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.135 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.395 [2024-11-27 19:08:24.803644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.395 19:08:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.395 "name": "Existed_Raid", 00:10:15.395 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:15.395 "strip_size_kb": 64, 00:10:15.395 "state": "configuring", 00:10:15.395 "raid_level": "concat", 00:10:15.395 "superblock": true, 00:10:15.395 "num_base_bdevs": 3, 00:10:15.395 
"num_base_bdevs_discovered": 1, 00:10:15.395 "num_base_bdevs_operational": 3, 00:10:15.395 "base_bdevs_list": [ 00:10:15.395 { 00:10:15.395 "name": null, 00:10:15.395 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:15.395 "is_configured": false, 00:10:15.395 "data_offset": 0, 00:10:15.395 "data_size": 63488 00:10:15.395 }, 00:10:15.395 { 00:10:15.395 "name": null, 00:10:15.395 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:15.395 "is_configured": false, 00:10:15.395 "data_offset": 0, 00:10:15.395 "data_size": 63488 00:10:15.395 }, 00:10:15.395 { 00:10:15.395 "name": "BaseBdev3", 00:10:15.395 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:15.395 "is_configured": true, 00:10:15.395 "data_offset": 2048, 00:10:15.395 "data_size": 63488 00:10:15.395 } 00:10:15.395 ] 00:10:15.395 }' 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.395 19:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.965 19:08:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.965 [2024-11-27 19:08:25.371811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.965 
19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.965 "name": "Existed_Raid", 00:10:15.965 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:15.965 "strip_size_kb": 64, 00:10:15.965 "state": "configuring", 00:10:15.965 "raid_level": "concat", 00:10:15.965 "superblock": true, 00:10:15.965 "num_base_bdevs": 3, 00:10:15.965 "num_base_bdevs_discovered": 2, 00:10:15.965 "num_base_bdevs_operational": 3, 00:10:15.965 "base_bdevs_list": [ 00:10:15.965 { 00:10:15.965 "name": null, 00:10:15.965 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:15.965 "is_configured": false, 00:10:15.965 "data_offset": 0, 00:10:15.965 "data_size": 63488 00:10:15.965 }, 00:10:15.965 { 00:10:15.965 "name": "BaseBdev2", 00:10:15.965 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:15.965 "is_configured": true, 00:10:15.965 "data_offset": 2048, 00:10:15.965 "data_size": 63488 00:10:15.965 }, 00:10:15.965 { 00:10:15.965 "name": "BaseBdev3", 00:10:15.965 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:15.965 "is_configured": true, 00:10:15.965 "data_offset": 2048, 00:10:15.965 "data_size": 63488 00:10:15.965 } 00:10:15.965 ] 00:10:15.965 }' 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.965 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:16.225 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bea413a8-4bf1-4c60-9881-b2be972db5b9 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 [2024-11-27 19:08:25.950295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:16.486 [2024-11-27 19:08:25.950668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.486 [2024-11-27 19:08:25.950746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.486 [2024-11-27 19:08:25.951081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:16.486 NewBaseBdev 00:10:16.486 [2024-11-27 19:08:25.951301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.486 [2024-11-27 19:08:25.951346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:16.486 [2024-11-27 19:08:25.951549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 [ 00:10:16.486 { 00:10:16.486 "name": "NewBaseBdev", 00:10:16.486 "aliases": [ 00:10:16.486 "bea413a8-4bf1-4c60-9881-b2be972db5b9" 00:10:16.486 ], 00:10:16.486 "product_name": "Malloc disk", 00:10:16.486 "block_size": 512, 
00:10:16.486 "num_blocks": 65536, 00:10:16.486 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:16.486 "assigned_rate_limits": { 00:10:16.486 "rw_ios_per_sec": 0, 00:10:16.486 "rw_mbytes_per_sec": 0, 00:10:16.486 "r_mbytes_per_sec": 0, 00:10:16.486 "w_mbytes_per_sec": 0 00:10:16.486 }, 00:10:16.486 "claimed": true, 00:10:16.486 "claim_type": "exclusive_write", 00:10:16.486 "zoned": false, 00:10:16.486 "supported_io_types": { 00:10:16.486 "read": true, 00:10:16.486 "write": true, 00:10:16.486 "unmap": true, 00:10:16.486 "flush": true, 00:10:16.486 "reset": true, 00:10:16.486 "nvme_admin": false, 00:10:16.486 "nvme_io": false, 00:10:16.486 "nvme_io_md": false, 00:10:16.486 "write_zeroes": true, 00:10:16.486 "zcopy": true, 00:10:16.486 "get_zone_info": false, 00:10:16.486 "zone_management": false, 00:10:16.486 "zone_append": false, 00:10:16.486 "compare": false, 00:10:16.486 "compare_and_write": false, 00:10:16.486 "abort": true, 00:10:16.486 "seek_hole": false, 00:10:16.486 "seek_data": false, 00:10:16.486 "copy": true, 00:10:16.486 "nvme_iov_md": false 00:10:16.486 }, 00:10:16.486 "memory_domains": [ 00:10:16.486 { 00:10:16.486 "dma_device_id": "system", 00:10:16.486 "dma_device_type": 1 00:10:16.486 }, 00:10:16.486 { 00:10:16.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.486 "dma_device_type": 2 00:10:16.486 } 00:10:16.486 ], 00:10:16.486 "driver_specific": {} 00:10:16.486 } 00:10:16.486 ] 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.486 19:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.486 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.486 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.486 "name": "Existed_Raid", 00:10:16.486 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:16.486 "strip_size_kb": 64, 00:10:16.486 "state": "online", 00:10:16.486 "raid_level": "concat", 00:10:16.486 "superblock": true, 00:10:16.486 "num_base_bdevs": 3, 00:10:16.486 "num_base_bdevs_discovered": 3, 00:10:16.486 "num_base_bdevs_operational": 3, 00:10:16.486 "base_bdevs_list": [ 00:10:16.486 { 00:10:16.486 "name": "NewBaseBdev", 00:10:16.486 "uuid": 
"bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:16.486 "is_configured": true, 00:10:16.486 "data_offset": 2048, 00:10:16.486 "data_size": 63488 00:10:16.486 }, 00:10:16.486 { 00:10:16.486 "name": "BaseBdev2", 00:10:16.486 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:16.486 "is_configured": true, 00:10:16.486 "data_offset": 2048, 00:10:16.486 "data_size": 63488 00:10:16.486 }, 00:10:16.486 { 00:10:16.486 "name": "BaseBdev3", 00:10:16.486 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:16.486 "is_configured": true, 00:10:16.486 "data_offset": 2048, 00:10:16.486 "data_size": 63488 00:10:16.486 } 00:10:16.486 ] 00:10:16.486 }' 00:10:16.486 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.486 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:17.057 [2024-11-27 19:08:26.429957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.057 "name": "Existed_Raid", 00:10:17.057 "aliases": [ 00:10:17.057 "bded7436-cb08-4fc2-9ba2-4041cffb35bc" 00:10:17.057 ], 00:10:17.057 "product_name": "Raid Volume", 00:10:17.057 "block_size": 512, 00:10:17.057 "num_blocks": 190464, 00:10:17.057 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:17.057 "assigned_rate_limits": { 00:10:17.057 "rw_ios_per_sec": 0, 00:10:17.057 "rw_mbytes_per_sec": 0, 00:10:17.057 "r_mbytes_per_sec": 0, 00:10:17.057 "w_mbytes_per_sec": 0 00:10:17.057 }, 00:10:17.057 "claimed": false, 00:10:17.057 "zoned": false, 00:10:17.057 "supported_io_types": { 00:10:17.057 "read": true, 00:10:17.057 "write": true, 00:10:17.057 "unmap": true, 00:10:17.057 "flush": true, 00:10:17.057 "reset": true, 00:10:17.057 "nvme_admin": false, 00:10:17.057 "nvme_io": false, 00:10:17.057 "nvme_io_md": false, 00:10:17.057 "write_zeroes": true, 00:10:17.057 "zcopy": false, 00:10:17.057 "get_zone_info": false, 00:10:17.057 "zone_management": false, 00:10:17.057 "zone_append": false, 00:10:17.057 "compare": false, 00:10:17.057 "compare_and_write": false, 00:10:17.057 "abort": false, 00:10:17.057 "seek_hole": false, 00:10:17.057 "seek_data": false, 00:10:17.057 "copy": false, 00:10:17.057 "nvme_iov_md": false 00:10:17.057 }, 00:10:17.057 "memory_domains": [ 00:10:17.057 { 00:10:17.057 "dma_device_id": "system", 00:10:17.057 "dma_device_type": 1 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.057 "dma_device_type": 2 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 "dma_device_id": "system", 00:10:17.057 "dma_device_type": 1 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.057 "dma_device_type": 2 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 "dma_device_id": "system", 00:10:17.057 "dma_device_type": 1 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.057 "dma_device_type": 2 00:10:17.057 } 00:10:17.057 ], 00:10:17.057 "driver_specific": { 00:10:17.057 "raid": { 00:10:17.057 "uuid": "bded7436-cb08-4fc2-9ba2-4041cffb35bc", 00:10:17.057 "strip_size_kb": 64, 00:10:17.057 "state": "online", 00:10:17.057 "raid_level": "concat", 00:10:17.057 "superblock": true, 00:10:17.057 "num_base_bdevs": 3, 00:10:17.057 "num_base_bdevs_discovered": 3, 00:10:17.057 "num_base_bdevs_operational": 3, 00:10:17.057 "base_bdevs_list": [ 00:10:17.057 { 00:10:17.057 "name": "NewBaseBdev", 00:10:17.057 "uuid": "bea413a8-4bf1-4c60-9881-b2be972db5b9", 00:10:17.057 "is_configured": true, 00:10:17.057 "data_offset": 2048, 00:10:17.057 "data_size": 63488 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 "name": "BaseBdev2", 00:10:17.057 "uuid": "14ddeb1c-524e-4a83-8025-a7c372ecb74e", 00:10:17.057 "is_configured": true, 00:10:17.057 "data_offset": 2048, 00:10:17.057 "data_size": 63488 00:10:17.057 }, 00:10:17.057 { 00:10:17.057 "name": "BaseBdev3", 00:10:17.057 "uuid": "87c943ba-96af-449b-b160-7363748e8479", 00:10:17.057 "is_configured": true, 00:10:17.057 "data_offset": 2048, 00:10:17.057 "data_size": 63488 00:10:17.057 } 00:10:17.057 ] 00:10:17.057 } 00:10:17.057 } 00:10:17.057 }' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:17.057 BaseBdev2 00:10:17.057 BaseBdev3' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.057 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.058 [2024-11-27 19:08:26.641191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.058 [2024-11-27 19:08:26.641269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.058 [2024-11-27 19:08:26.641404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.058 [2024-11-27 19:08:26.641494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.058 [2024-11-27 19:08:26.641544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66326 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66326 ']' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66326 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66326 00:10:17.058 killing process with pid 66326 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66326' 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66326 00:10:17.058 [2024-11-27 19:08:26.688028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.058 19:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66326 00:10:17.628 [2024-11-27 19:08:27.022434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.009 19:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.009 00:10:19.009 real 0m10.633s 00:10:19.009 user 0m16.516s 00:10:19.009 sys 0m2.015s 00:10:19.009 19:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:19.009 ************************************ 00:10:19.009 END TEST raid_state_function_test_sb 00:10:19.009 ************************************ 00:10:19.009 19:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.009 19:08:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:19.009 19:08:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.009 19:08:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.009 19:08:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.009 ************************************ 00:10:19.009 START TEST raid_superblock_test 00:10:19.009 ************************************ 00:10:19.009 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.010 19:08:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66946 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66946 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66946 ']' 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.010 19:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.010 [2024-11-27 19:08:28.435259] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:19.010 [2024-11-27 19:08:28.435453] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66946 ] 00:10:19.010 [2024-11-27 19:08:28.611523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.269 [2024-11-27 19:08:28.751853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.529 [2024-11-27 19:08:28.993994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.529 [2024-11-27 19:08:28.994180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:19.788 
19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.788 malloc1 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.788 [2024-11-27 19:08:29.315347] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:19.788 [2024-11-27 19:08:29.315464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.788 [2024-11-27 19:08:29.315496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:19.788 [2024-11-27 19:08:29.315507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.788 [2024-11-27 19:08:29.318035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.788 [2024-11-27 19:08:29.318075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:19.788 pt1 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.788 malloc2 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.788 [2024-11-27 19:08:29.380635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.788 [2024-11-27 19:08:29.380756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.788 [2024-11-27 19:08:29.380806] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:19.788 [2024-11-27 19:08:29.380846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.788 [2024-11-27 19:08:29.383301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.788 [2024-11-27 19:08:29.383371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.788 
pt2 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.788 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.047 malloc3 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.047 [2024-11-27 19:08:29.462542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.047 [2024-11-27 19:08:29.462656] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.047 [2024-11-27 19:08:29.462708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.047 [2024-11-27 19:08:29.462746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.047 [2024-11-27 19:08:29.465214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.047 [2024-11-27 19:08:29.465287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.047 pt3 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.047 [2024-11-27 19:08:29.474583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.047 [2024-11-27 19:08:29.476819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.047 [2024-11-27 19:08:29.476951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.047 [2024-11-27 19:08:29.477145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.047 [2024-11-27 19:08:29.477197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:20.047 [2024-11-27 19:08:29.477494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:20.047 [2024-11-27 19:08:29.477730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.047 [2024-11-27 19:08:29.477772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.047 [2024-11-27 19:08:29.477990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.047 19:08:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.047 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.048 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.048 "name": "raid_bdev1", 00:10:20.048 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:20.048 "strip_size_kb": 64, 00:10:20.048 "state": "online", 00:10:20.048 "raid_level": "concat", 00:10:20.048 "superblock": true, 00:10:20.048 "num_base_bdevs": 3, 00:10:20.048 "num_base_bdevs_discovered": 3, 00:10:20.048 "num_base_bdevs_operational": 3, 00:10:20.048 "base_bdevs_list": [ 00:10:20.048 { 00:10:20.048 "name": "pt1", 00:10:20.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.048 "is_configured": true, 00:10:20.048 "data_offset": 2048, 00:10:20.048 "data_size": 63488 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "name": "pt2", 00:10:20.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.048 "is_configured": true, 00:10:20.048 "data_offset": 2048, 00:10:20.048 "data_size": 63488 00:10:20.048 }, 00:10:20.048 { 00:10:20.048 "name": "pt3", 00:10:20.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.048 "is_configured": true, 00:10:20.048 "data_offset": 2048, 00:10:20.048 "data_size": 63488 00:10:20.048 } 00:10:20.048 ] 00:10:20.048 }' 00:10:20.048 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.048 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.307 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.307 [2024-11-27 19:08:29.922149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.567 19:08:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.567 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.567 "name": "raid_bdev1", 00:10:20.567 "aliases": [ 00:10:20.567 "1920dc13-b4d7-4db6-803f-cc368c2fe743" 00:10:20.567 ], 00:10:20.567 "product_name": "Raid Volume", 00:10:20.567 "block_size": 512, 00:10:20.567 "num_blocks": 190464, 00:10:20.567 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:20.567 "assigned_rate_limits": { 00:10:20.567 "rw_ios_per_sec": 0, 00:10:20.567 "rw_mbytes_per_sec": 0, 00:10:20.567 "r_mbytes_per_sec": 0, 00:10:20.567 "w_mbytes_per_sec": 0 00:10:20.567 }, 00:10:20.567 "claimed": false, 00:10:20.567 "zoned": false, 00:10:20.567 "supported_io_types": { 00:10:20.567 "read": true, 00:10:20.567 "write": true, 00:10:20.567 "unmap": true, 00:10:20.567 "flush": true, 00:10:20.567 "reset": true, 00:10:20.567 "nvme_admin": false, 00:10:20.567 "nvme_io": false, 00:10:20.567 "nvme_io_md": false, 00:10:20.567 "write_zeroes": true, 00:10:20.567 "zcopy": false, 00:10:20.567 "get_zone_info": false, 00:10:20.567 "zone_management": false, 00:10:20.567 "zone_append": false, 00:10:20.567 "compare": 
false, 00:10:20.567 "compare_and_write": false, 00:10:20.567 "abort": false, 00:10:20.567 "seek_hole": false, 00:10:20.567 "seek_data": false, 00:10:20.567 "copy": false, 00:10:20.567 "nvme_iov_md": false 00:10:20.567 }, 00:10:20.567 "memory_domains": [ 00:10:20.567 { 00:10:20.567 "dma_device_id": "system", 00:10:20.567 "dma_device_type": 1 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.567 "dma_device_type": 2 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "dma_device_id": "system", 00:10:20.567 "dma_device_type": 1 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.567 "dma_device_type": 2 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "dma_device_id": "system", 00:10:20.567 "dma_device_type": 1 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.567 "dma_device_type": 2 00:10:20.567 } 00:10:20.567 ], 00:10:20.567 "driver_specific": { 00:10:20.567 "raid": { 00:10:20.567 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:20.567 "strip_size_kb": 64, 00:10:20.567 "state": "online", 00:10:20.567 "raid_level": "concat", 00:10:20.567 "superblock": true, 00:10:20.567 "num_base_bdevs": 3, 00:10:20.567 "num_base_bdevs_discovered": 3, 00:10:20.567 "num_base_bdevs_operational": 3, 00:10:20.567 "base_bdevs_list": [ 00:10:20.567 { 00:10:20.567 "name": "pt1", 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.567 "is_configured": true, 00:10:20.567 "data_offset": 2048, 00:10:20.567 "data_size": 63488 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "name": "pt2", 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.567 "is_configured": true, 00:10:20.567 "data_offset": 2048, 00:10:20.567 "data_size": 63488 00:10:20.567 }, 00:10:20.567 { 00:10:20.567 "name": "pt3", 00:10:20.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.567 "is_configured": true, 00:10:20.567 "data_offset": 2048, 00:10:20.567 
"data_size": 63488 00:10:20.567 } 00:10:20.567 ] 00:10:20.567 } 00:10:20.567 } 00:10:20.567 }' 00:10:20.567 19:08:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.568 pt2 00:10:20.568 pt3' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.568 19:08:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.568 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 [2024-11-27 19:08:30.209552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.828 19:08:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1920dc13-b4d7-4db6-803f-cc368c2fe743 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1920dc13-b4d7-4db6-803f-cc368c2fe743 ']' 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 [2024-11-27 19:08:30.237194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.828 [2024-11-27 19:08:30.237263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.828 [2024-11-27 19:08:30.237378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.828 [2024-11-27 19:08:30.237466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.828 [2024-11-27 19:08:30.237515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.828 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.829 [2024-11-27 19:08:30.388997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:20.829 [2024-11-27 19:08:30.391204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:10:20.829 [2024-11-27 19:08:30.391321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:20.829 [2024-11-27 19:08:30.391423] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:20.829 [2024-11-27 19:08:30.391532] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:20.829 [2024-11-27 19:08:30.391596] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:20.829 [2024-11-27 19:08:30.391667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.829 [2024-11-27 19:08:30.391712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:20.829 request: 00:10:20.829 { 00:10:20.829 "name": "raid_bdev1", 00:10:20.829 "raid_level": "concat", 00:10:20.829 "base_bdevs": [ 00:10:20.829 "malloc1", 00:10:20.829 "malloc2", 00:10:20.829 "malloc3" 00:10:20.829 ], 00:10:20.829 "strip_size_kb": 64, 00:10:20.829 "superblock": false, 00:10:20.829 "method": "bdev_raid_create", 00:10:20.829 "req_id": 1 00:10:20.829 } 00:10:20.829 Got JSON-RPC error response 00:10:20.829 response: 00:10:20.829 { 00:10:20.829 "code": -17, 00:10:20.829 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:20.829 } 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.829 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.829 [2024-11-27 19:08:30.460834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.829 [2024-11-27 19:08:30.460931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.829 [2024-11-27 19:08:30.460970] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.829 [2024-11-27 19:08:30.460998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.089 [2024-11-27 19:08:30.463600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.089 [2024-11-27 19:08:30.463678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.089 [2024-11-27 19:08:30.463827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.089 [2024-11-27 19:08:30.463932] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.089 pt1 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.089 "name": "raid_bdev1", 
00:10:21.089 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:21.089 "strip_size_kb": 64, 00:10:21.089 "state": "configuring", 00:10:21.089 "raid_level": "concat", 00:10:21.089 "superblock": true, 00:10:21.089 "num_base_bdevs": 3, 00:10:21.089 "num_base_bdevs_discovered": 1, 00:10:21.089 "num_base_bdevs_operational": 3, 00:10:21.089 "base_bdevs_list": [ 00:10:21.089 { 00:10:21.089 "name": "pt1", 00:10:21.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.089 "is_configured": true, 00:10:21.089 "data_offset": 2048, 00:10:21.089 "data_size": 63488 00:10:21.089 }, 00:10:21.089 { 00:10:21.089 "name": null, 00:10:21.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.089 "is_configured": false, 00:10:21.089 "data_offset": 2048, 00:10:21.089 "data_size": 63488 00:10:21.089 }, 00:10:21.089 { 00:10:21.089 "name": null, 00:10:21.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.089 "is_configured": false, 00:10:21.089 "data_offset": 2048, 00:10:21.089 "data_size": 63488 00:10:21.089 } 00:10:21.089 ] 00:10:21.089 }' 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.089 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.349 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:21.349 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.349 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.349 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.350 [2024-11-27 19:08:30.928076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.350 [2024-11-27 19:08:30.928206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.350 [2024-11-27 19:08:30.928260] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:21.350 [2024-11-27 19:08:30.928295] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.350 [2024-11-27 19:08:30.928885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.350 [2024-11-27 19:08:30.928946] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.350 [2024-11-27 19:08:30.929085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.350 [2024-11-27 19:08:30.929127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.350 pt2 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.350 [2024-11-27 19:08:30.940056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.350 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.609 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.609 "name": "raid_bdev1", 00:10:21.609 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:21.609 "strip_size_kb": 64, 00:10:21.609 "state": "configuring", 00:10:21.609 "raid_level": "concat", 00:10:21.609 "superblock": true, 00:10:21.609 "num_base_bdevs": 3, 00:10:21.609 "num_base_bdevs_discovered": 1, 00:10:21.609 "num_base_bdevs_operational": 3, 00:10:21.609 "base_bdevs_list": [ 00:10:21.609 { 00:10:21.609 "name": "pt1", 00:10:21.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.609 "is_configured": true, 00:10:21.609 "data_offset": 2048, 00:10:21.609 "data_size": 63488 00:10:21.609 }, 00:10:21.609 { 00:10:21.609 "name": null, 00:10:21.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.609 "is_configured": false, 00:10:21.609 "data_offset": 0, 00:10:21.609 "data_size": 63488 00:10:21.609 }, 00:10:21.609 { 00:10:21.609 "name": null, 00:10:21.609 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.609 "is_configured": false, 00:10:21.609 "data_offset": 2048, 00:10:21.609 "data_size": 63488 00:10:21.609 } 00:10:21.609 ] 00:10:21.609 }' 00:10:21.609 19:08:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.609 19:08:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.869 [2024-11-27 19:08:31.379276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.869 [2024-11-27 19:08:31.379424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.869 [2024-11-27 19:08:31.379465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:21.869 [2024-11-27 19:08:31.379500] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.869 [2024-11-27 19:08:31.380107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.869 [2024-11-27 19:08:31.380174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.869 [2024-11-27 19:08:31.380303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.869 [2024-11-27 19:08:31.380363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.869 pt2 00:10:21.869 19:08:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.869 [2024-11-27 19:08:31.391212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.869 [2024-11-27 19:08:31.391314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.869 [2024-11-27 19:08:31.391347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:21.869 [2024-11-27 19:08:31.391377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.869 [2024-11-27 19:08:31.391836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.869 [2024-11-27 19:08:31.391899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.869 [2024-11-27 19:08:31.391994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.869 [2024-11-27 19:08:31.392051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.869 [2024-11-27 19:08:31.392211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.869 [2024-11-27 19:08:31.392254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:21.869 [2024-11-27 19:08:31.392565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:21.869 [2024-11-27 19:08:31.392779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.869 [2024-11-27 19:08:31.392821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:21.869 [2024-11-27 19:08:31.393026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.869 pt3 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.869 19:08:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.869 "name": "raid_bdev1", 00:10:21.869 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:21.869 "strip_size_kb": 64, 00:10:21.869 "state": "online", 00:10:21.869 "raid_level": "concat", 00:10:21.869 "superblock": true, 00:10:21.869 "num_base_bdevs": 3, 00:10:21.869 "num_base_bdevs_discovered": 3, 00:10:21.869 "num_base_bdevs_operational": 3, 00:10:21.869 "base_bdevs_list": [ 00:10:21.869 { 00:10:21.869 "name": "pt1", 00:10:21.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.869 "is_configured": true, 00:10:21.869 "data_offset": 2048, 00:10:21.869 "data_size": 63488 00:10:21.869 }, 00:10:21.869 { 00:10:21.869 "name": "pt2", 00:10:21.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.869 "is_configured": true, 00:10:21.869 "data_offset": 2048, 00:10:21.869 "data_size": 63488 00:10:21.869 }, 00:10:21.869 { 00:10:21.869 "name": "pt3", 00:10:21.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.869 "is_configured": true, 00:10:21.869 "data_offset": 2048, 00:10:21.869 "data_size": 63488 00:10:21.869 } 00:10:21.869 ] 00:10:21.869 }' 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.869 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.439 [2024-11-27 19:08:31.862799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.439 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.439 "name": "raid_bdev1", 00:10:22.439 "aliases": [ 00:10:22.439 "1920dc13-b4d7-4db6-803f-cc368c2fe743" 00:10:22.439 ], 00:10:22.439 "product_name": "Raid Volume", 00:10:22.439 "block_size": 512, 00:10:22.439 "num_blocks": 190464, 00:10:22.439 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:22.439 "assigned_rate_limits": { 00:10:22.439 "rw_ios_per_sec": 0, 00:10:22.439 "rw_mbytes_per_sec": 0, 00:10:22.439 "r_mbytes_per_sec": 0, 00:10:22.439 "w_mbytes_per_sec": 0 00:10:22.439 }, 00:10:22.439 "claimed": false, 00:10:22.439 "zoned": false, 00:10:22.439 "supported_io_types": { 00:10:22.439 "read": true, 00:10:22.439 "write": true, 00:10:22.439 "unmap": true, 00:10:22.439 "flush": true, 00:10:22.439 "reset": true, 00:10:22.439 "nvme_admin": false, 00:10:22.439 "nvme_io": false, 00:10:22.439 
"nvme_io_md": false, 00:10:22.439 "write_zeroes": true, 00:10:22.439 "zcopy": false, 00:10:22.439 "get_zone_info": false, 00:10:22.439 "zone_management": false, 00:10:22.439 "zone_append": false, 00:10:22.439 "compare": false, 00:10:22.439 "compare_and_write": false, 00:10:22.439 "abort": false, 00:10:22.439 "seek_hole": false, 00:10:22.439 "seek_data": false, 00:10:22.439 "copy": false, 00:10:22.439 "nvme_iov_md": false 00:10:22.439 }, 00:10:22.439 "memory_domains": [ 00:10:22.439 { 00:10:22.439 "dma_device_id": "system", 00:10:22.439 "dma_device_type": 1 00:10:22.439 }, 00:10:22.439 { 00:10:22.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.439 "dma_device_type": 2 00:10:22.439 }, 00:10:22.439 { 00:10:22.439 "dma_device_id": "system", 00:10:22.439 "dma_device_type": 1 00:10:22.439 }, 00:10:22.439 { 00:10:22.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.439 "dma_device_type": 2 00:10:22.439 }, 00:10:22.439 { 00:10:22.439 "dma_device_id": "system", 00:10:22.439 "dma_device_type": 1 00:10:22.439 }, 00:10:22.439 { 00:10:22.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.439 "dma_device_type": 2 00:10:22.439 } 00:10:22.439 ], 00:10:22.439 "driver_specific": { 00:10:22.439 "raid": { 00:10:22.439 "uuid": "1920dc13-b4d7-4db6-803f-cc368c2fe743", 00:10:22.439 "strip_size_kb": 64, 00:10:22.439 "state": "online", 00:10:22.439 "raid_level": "concat", 00:10:22.439 "superblock": true, 00:10:22.439 "num_base_bdevs": 3, 00:10:22.439 "num_base_bdevs_discovered": 3, 00:10:22.439 "num_base_bdevs_operational": 3, 00:10:22.439 "base_bdevs_list": [ 00:10:22.439 { 00:10:22.439 "name": "pt1", 00:10:22.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.439 "is_configured": true, 00:10:22.439 "data_offset": 2048, 00:10:22.439 "data_size": 63488 00:10:22.439 }, 00:10:22.439 { 00:10:22.439 "name": "pt2", 00:10:22.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.439 "is_configured": true, 00:10:22.439 "data_offset": 2048, 00:10:22.439 "data_size": 
63488 00:10:22.439 }, 00:10:22.439 { 00:10:22.440 "name": "pt3", 00:10:22.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.440 "is_configured": true, 00:10:22.440 "data_offset": 2048, 00:10:22.440 "data_size": 63488 00:10:22.440 } 00:10:22.440 ] 00:10:22.440 } 00:10:22.440 } 00:10:22.440 }' 00:10:22.440 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.440 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.440 pt2 00:10:22.440 pt3' 00:10:22.440 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.440 19:08:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.440 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r 
'.[] | .uuid' 00:10:22.700 [2024-11-27 19:08:32.162145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1920dc13-b4d7-4db6-803f-cc368c2fe743 '!=' 1920dc13-b4d7-4db6-803f-cc368c2fe743 ']' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66946 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66946 ']' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66946 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66946 00:10:22.700 killing process with pid 66946 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66946' 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66946 00:10:22.700 [2024-11-27 19:08:32.235789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.700 [2024-11-27 
19:08:32.235902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.700 19:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66946 00:10:22.700 [2024-11-27 19:08:32.235991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.700 [2024-11-27 19:08:32.236005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.960 [2024-11-27 19:08:32.569210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.360 19:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:24.360 00:10:24.360 real 0m5.463s 00:10:24.360 user 0m7.653s 00:10:24.360 sys 0m0.981s 00:10:24.360 19:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.360 ************************************ 00:10:24.360 END TEST raid_superblock_test 00:10:24.360 ************************************ 00:10:24.360 19:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.360 19:08:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:24.360 19:08:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.360 19:08:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.360 19:08:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.360 ************************************ 00:10:24.360 START TEST raid_read_error_test 00:10:24.360 ************************************ 00:10:24.360 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:24.360 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:24.360 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:24.360 
19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:24.360 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.361 19:08:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l4j0UErGy6 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67205 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67205 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67205 ']' 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.361 19:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.620 [2024-11-27 19:08:33.992718] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:24.620 [2024-11-27 19:08:33.992875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67205 ] 00:10:24.620 [2024-11-27 19:08:34.173560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.880 [2024-11-27 19:08:34.311354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.139 [2024-11-27 19:08:34.543288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.139 [2024-11-27 19:08:34.543486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 BaseBdev1_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 true 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 [2024-11-27 19:08:34.884126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.400 [2024-11-27 19:08:34.884234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.400 [2024-11-27 19:08:34.884274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.400 [2024-11-27 19:08:34.884311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.400 [2024-11-27 19:08:34.886724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.400 [2024-11-27 19:08:34.886799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.400 BaseBdev1 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 BaseBdev2_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 true 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 [2024-11-27 19:08:34.957079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.400 [2024-11-27 19:08:34.957139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.400 [2024-11-27 19:08:34.957156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.400 [2024-11-27 19:08:34.957168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.400 [2024-11-27 19:08:34.959568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.400 [2024-11-27 19:08:34.959611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.400 BaseBdev2 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.400 BaseBdev3_malloc 00:10:25.400 19:08:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.400 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.400 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.400 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.660 true 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.660 [2024-11-27 19:08:35.043168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.660 [2024-11-27 19:08:35.043291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.660 [2024-11-27 19:08:35.043334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.660 [2024-11-27 19:08:35.043404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.660 [2024-11-27 19:08:35.045992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.660 [2024-11-27 19:08:35.046068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.660 BaseBdev3 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.660 [2024-11-27 19:08:35.055236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.660 [2024-11-27 19:08:35.057369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.660 [2024-11-27 19:08:35.057486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.660 [2024-11-27 19:08:35.057745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:25.660 [2024-11-27 19:08:35.057793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.660 [2024-11-27 19:08:35.058069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:25.660 [2024-11-27 19:08:35.058278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:25.660 [2024-11-27 19:08:35.058325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:25.660 [2024-11-27 19:08:35.058511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.660 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.661 19:08:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.661 "name": "raid_bdev1", 00:10:25.661 "uuid": "52e71bc1-08ff-4b78-b19c-ad75b836ebd4", 00:10:25.661 "strip_size_kb": 64, 00:10:25.661 "state": "online", 00:10:25.661 "raid_level": "concat", 00:10:25.661 "superblock": true, 00:10:25.661 "num_base_bdevs": 3, 00:10:25.661 "num_base_bdevs_discovered": 3, 00:10:25.661 "num_base_bdevs_operational": 3, 00:10:25.661 "base_bdevs_list": [ 00:10:25.661 { 00:10:25.661 "name": "BaseBdev1", 00:10:25.661 "uuid": "af5030b7-6960-527c-aa1c-6ed8eb2bfe6f", 00:10:25.661 "is_configured": true, 00:10:25.661 "data_offset": 2048, 00:10:25.661 "data_size": 63488 00:10:25.661 }, 00:10:25.661 { 00:10:25.661 "name": "BaseBdev2", 00:10:25.661 "uuid": "696a9a97-c605-5ae5-a71e-e9cfccd5eae7", 00:10:25.661 "is_configured": true, 00:10:25.661 "data_offset": 2048, 00:10:25.661 "data_size": 63488 
00:10:25.661 }, 00:10:25.661 { 00:10:25.661 "name": "BaseBdev3", 00:10:25.661 "uuid": "898eb8d6-1707-5eab-ba6d-3a76f754de8b", 00:10:25.661 "is_configured": true, 00:10:25.661 "data_offset": 2048, 00:10:25.661 "data_size": 63488 00:10:25.661 } 00:10:25.661 ] 00:10:25.661 }' 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.661 19:08:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.921 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.921 19:08:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.181 [2024-11-27 19:08:35.579814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.123 "name": "raid_bdev1", 00:10:27.123 "uuid": "52e71bc1-08ff-4b78-b19c-ad75b836ebd4", 00:10:27.123 "strip_size_kb": 64, 00:10:27.123 "state": "online", 00:10:27.123 "raid_level": "concat", 00:10:27.123 "superblock": true, 00:10:27.123 "num_base_bdevs": 3, 00:10:27.123 "num_base_bdevs_discovered": 3, 00:10:27.123 "num_base_bdevs_operational": 3, 00:10:27.123 "base_bdevs_list": [ 00:10:27.123 { 00:10:27.123 "name": "BaseBdev1", 00:10:27.123 "uuid": "af5030b7-6960-527c-aa1c-6ed8eb2bfe6f", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 
00:10:27.123 }, 00:10:27.123 { 00:10:27.123 "name": "BaseBdev2", 00:10:27.123 "uuid": "696a9a97-c605-5ae5-a71e-e9cfccd5eae7", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 }, 00:10:27.123 { 00:10:27.123 "name": "BaseBdev3", 00:10:27.123 "uuid": "898eb8d6-1707-5eab-ba6d-3a76f754de8b", 00:10:27.123 "is_configured": true, 00:10:27.123 "data_offset": 2048, 00:10:27.123 "data_size": 63488 00:10:27.123 } 00:10:27.123 ] 00:10:27.123 }' 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.123 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.384 [2024-11-27 19:08:36.989205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.384 [2024-11-27 19:08:36.989307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.384 [2024-11-27 19:08:36.992084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.384 [2024-11-27 19:08:36.992174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.384 [2024-11-27 19:08:36.992238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.384 [2024-11-27 19:08:36.992282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:27.384 { 00:10:27.384 "results": [ 00:10:27.384 { 00:10:27.384 "job": "raid_bdev1", 00:10:27.384 "core_mask": "0x1", 00:10:27.384 "workload": "randrw", 00:10:27.384 "percentage": 50, 
00:10:27.384 "status": "finished", 00:10:27.384 "queue_depth": 1, 00:10:27.384 "io_size": 131072, 00:10:27.384 "runtime": 1.410129, 00:10:27.384 "iops": 13573.93543427587, 00:10:27.384 "mibps": 1696.7419292844838, 00:10:27.384 "io_failed": 1, 00:10:27.384 "io_timeout": 0, 00:10:27.384 "avg_latency_us": 103.39625259893994, 00:10:27.384 "min_latency_us": 26.606113537117903, 00:10:27.384 "max_latency_us": 1387.989519650655 00:10:27.384 } 00:10:27.384 ], 00:10:27.384 "core_count": 1 00:10:27.384 } 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67205 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67205 ']' 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67205 00:10:27.384 19:08:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:27.384 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.384 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67205 00:10:27.644 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.644 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.644 killing process with pid 67205 00:10:27.644 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67205' 00:10:27.644 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67205 00:10:27.644 [2024-11-27 19:08:37.035186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.644 19:08:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67205 00:10:27.903 [2024-11-27 
19:08:37.291588] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l4j0UErGy6 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:29.309 00:10:29.309 real 0m4.725s 00:10:29.309 user 0m5.473s 00:10:29.309 sys 0m0.678s 00:10:29.309 ************************************ 00:10:29.309 END TEST raid_read_error_test 00:10:29.309 ************************************ 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.309 19:08:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.309 19:08:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:29.309 19:08:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.309 19:08:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.309 19:08:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.309 ************************************ 00:10:29.309 START TEST raid_write_error_test 00:10:29.309 ************************************ 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:29.309 19:08:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.309 19:08:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5vccT4Y5tl 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67350 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67350 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67350 ']' 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.309 19:08:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.309 [2024-11-27 19:08:38.786801] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:29.309 [2024-11-27 19:08:38.786912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67350 ] 00:10:29.569 [2024-11-27 19:08:38.961266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.569 [2024-11-27 19:08:39.097106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.827 [2024-11-27 19:08:39.335616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.827 [2024-11-27 19:08:39.335813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.086 BaseBdev1_malloc 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.086 true 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.086 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.086 [2024-11-27 19:08:39.668221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.086 [2024-11-27 19:08:39.668329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.086 [2024-11-27 19:08:39.668370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.086 [2024-11-27 19:08:39.668402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.086 [2024-11-27 19:08:39.670886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.086 [2024-11-27 19:08:39.670992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.086 BaseBdev1 00:10:30.087 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.087 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.087 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.087 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.087 19:08:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.347 BaseBdev2_malloc 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 true 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 [2024-11-27 19:08:39.734707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.347 [2024-11-27 19:08:39.734763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.347 [2024-11-27 19:08:39.734780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.347 [2024-11-27 19:08:39.734792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.347 [2024-11-27 19:08:39.737159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.347 [2024-11-27 19:08:39.737197] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.347 BaseBdev2 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.347 19:08:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 BaseBdev3_malloc 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 true 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 [2024-11-27 19:08:39.843932] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.347 [2024-11-27 19:08:39.844041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.347 [2024-11-27 19:08:39.844079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.347 [2024-11-27 19:08:39.844112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.347 [2024-11-27 19:08:39.846575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.347 [2024-11-27 19:08:39.846652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:30.347 BaseBdev3 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 [2024-11-27 19:08:39.855984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.347 [2024-11-27 19:08:39.858073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.347 [2024-11-27 19:08:39.858192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.347 [2024-11-27 19:08:39.858441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.347 [2024-11-27 19:08:39.858490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:30.347 [2024-11-27 19:08:39.858779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:30.347 [2024-11-27 19:08:39.858989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.347 [2024-11-27 19:08:39.859044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:30.347 [2024-11-27 19:08:39.859247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.347 "name": "raid_bdev1", 00:10:30.347 "uuid": "1cf60d71-ab80-4920-996f-f9164541f64e", 00:10:30.347 "strip_size_kb": 64, 00:10:30.347 "state": "online", 00:10:30.347 "raid_level": "concat", 00:10:30.347 "superblock": true, 00:10:30.347 "num_base_bdevs": 3, 00:10:30.347 "num_base_bdevs_discovered": 3, 00:10:30.347 "num_base_bdevs_operational": 3, 00:10:30.347 "base_bdevs_list": [ 00:10:30.347 { 00:10:30.347 
"name": "BaseBdev1", 00:10:30.347 "uuid": "4251645f-f682-5324-95db-bb70a6f2f181", 00:10:30.347 "is_configured": true, 00:10:30.347 "data_offset": 2048, 00:10:30.347 "data_size": 63488 00:10:30.347 }, 00:10:30.347 { 00:10:30.347 "name": "BaseBdev2", 00:10:30.347 "uuid": "56685b5b-be5f-50f4-be8b-56d1d992f1f7", 00:10:30.347 "is_configured": true, 00:10:30.347 "data_offset": 2048, 00:10:30.347 "data_size": 63488 00:10:30.347 }, 00:10:30.347 { 00:10:30.347 "name": "BaseBdev3", 00:10:30.347 "uuid": "a2391a33-595b-5887-ae7e-ebfce443f525", 00:10:30.347 "is_configured": true, 00:10:30.347 "data_offset": 2048, 00:10:30.347 "data_size": 63488 00:10:30.347 } 00:10:30.347 ] 00:10:30.347 }' 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.347 19:08:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.918 19:08:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.918 19:08:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.918 [2024-11-27 19:08:40.392506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.859 "name": "raid_bdev1", 00:10:31.859 "uuid": "1cf60d71-ab80-4920-996f-f9164541f64e", 00:10:31.859 "strip_size_kb": 64, 00:10:31.859 "state": "online", 
00:10:31.859 "raid_level": "concat", 00:10:31.859 "superblock": true, 00:10:31.859 "num_base_bdevs": 3, 00:10:31.859 "num_base_bdevs_discovered": 3, 00:10:31.859 "num_base_bdevs_operational": 3, 00:10:31.859 "base_bdevs_list": [ 00:10:31.859 { 00:10:31.859 "name": "BaseBdev1", 00:10:31.859 "uuid": "4251645f-f682-5324-95db-bb70a6f2f181", 00:10:31.859 "is_configured": true, 00:10:31.859 "data_offset": 2048, 00:10:31.859 "data_size": 63488 00:10:31.859 }, 00:10:31.859 { 00:10:31.859 "name": "BaseBdev2", 00:10:31.859 "uuid": "56685b5b-be5f-50f4-be8b-56d1d992f1f7", 00:10:31.859 "is_configured": true, 00:10:31.859 "data_offset": 2048, 00:10:31.859 "data_size": 63488 00:10:31.859 }, 00:10:31.859 { 00:10:31.859 "name": "BaseBdev3", 00:10:31.859 "uuid": "a2391a33-595b-5887-ae7e-ebfce443f525", 00:10:31.859 "is_configured": true, 00:10:31.859 "data_offset": 2048, 00:10:31.859 "data_size": 63488 00:10:31.859 } 00:10:31.859 ] 00:10:31.859 }' 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.859 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.430 [2024-11-27 19:08:41.785593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.430 [2024-11-27 19:08:41.785681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.430 [2024-11-27 19:08:41.788486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.430 [2024-11-27 19:08:41.788582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.430 [2024-11-27 19:08:41.788645] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.430 [2024-11-27 19:08:41.788703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:32.430 { 00:10:32.430 "results": [ 00:10:32.430 { 00:10:32.430 "job": "raid_bdev1", 00:10:32.430 "core_mask": "0x1", 00:10:32.430 "workload": "randrw", 00:10:32.430 "percentage": 50, 00:10:32.430 "status": "finished", 00:10:32.430 "queue_depth": 1, 00:10:32.430 "io_size": 131072, 00:10:32.430 "runtime": 1.393783, 00:10:32.430 "iops": 13533.670592911521, 00:10:32.430 "mibps": 1691.7088241139402, 00:10:32.430 "io_failed": 1, 00:10:32.430 "io_timeout": 0, 00:10:32.430 "avg_latency_us": 103.69741806208354, 00:10:32.430 "min_latency_us": 25.2646288209607, 00:10:32.430 "max_latency_us": 1402.2986899563318 00:10:32.430 } 00:10:32.430 ], 00:10:32.430 "core_count": 1 00:10:32.430 } 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67350 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67350 ']' 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67350 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67350 00:10:32.430 killing process with pid 67350 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.430 19:08:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67350' 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67350 00:10:32.430 [2024-11-27 19:08:41.835582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.430 19:08:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67350 00:10:32.690 [2024-11-27 19:08:42.088536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5vccT4Y5tl 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:34.071 ************************************ 00:10:34.071 END TEST raid_write_error_test 00:10:34.071 ************************************ 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:34.071 00:10:34.071 real 0m4.721s 00:10:34.071 user 0m5.460s 00:10:34.071 sys 0m0.675s 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.071 19:08:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.071 19:08:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:34.071 19:08:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:34.071 19:08:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.071 19:08:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.071 19:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.071 ************************************ 00:10:34.071 START TEST raid_state_function_test 00:10:34.071 ************************************ 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:34.071 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:34.072 Process raid pid: 67494 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67494 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67494' 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67494 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67494 ']' 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.072 19:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.072 [2024-11-27 19:08:43.568602] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:34.072 [2024-11-27 19:08:43.568734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.360 [2024-11-27 19:08:43.741471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.360 [2024-11-27 19:08:43.881992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.620 [2024-11-27 19:08:44.120172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.620 [2024-11-27 19:08:44.120209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.879 [2024-11-27 19:08:44.401316] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.879 [2024-11-27 19:08:44.401385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.879 [2024-11-27 19:08:44.401396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.879 [2024-11-27 19:08:44.401406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.879 [2024-11-27 19:08:44.401412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.879 [2024-11-27 19:08:44.401421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.879 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.880 
19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.880 "name": "Existed_Raid", 00:10:34.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.880 "strip_size_kb": 0, 00:10:34.880 "state": "configuring", 00:10:34.880 "raid_level": "raid1", 00:10:34.880 "superblock": false, 00:10:34.880 "num_base_bdevs": 3, 00:10:34.880 "num_base_bdevs_discovered": 0, 00:10:34.880 "num_base_bdevs_operational": 3, 00:10:34.880 "base_bdevs_list": [ 00:10:34.880 { 00:10:34.880 "name": "BaseBdev1", 00:10:34.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.880 "is_configured": false, 00:10:34.880 "data_offset": 0, 00:10:34.880 "data_size": 0 00:10:34.880 }, 00:10:34.880 { 00:10:34.880 "name": "BaseBdev2", 00:10:34.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.880 "is_configured": false, 00:10:34.880 "data_offset": 0, 00:10:34.880 "data_size": 0 00:10:34.880 }, 00:10:34.880 { 00:10:34.880 "name": "BaseBdev3", 00:10:34.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.880 "is_configured": false, 00:10:34.880 "data_offset": 0, 00:10:34.880 "data_size": 0 00:10:34.880 } 00:10:34.880 ] 00:10:34.880 }' 00:10:34.880 19:08:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.880 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.449 [2024-11-27 19:08:44.840541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.449 [2024-11-27 19:08:44.840636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.449 [2024-11-27 19:08:44.852488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.449 [2024-11-27 19:08:44.852579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.449 [2024-11-27 19:08:44.852611] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.449 [2024-11-27 19:08:44.852647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.449 [2024-11-27 19:08:44.852673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.449 [2024-11-27 19:08:44.852719] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.449 [2024-11-27 19:08:44.908191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.449 BaseBdev1 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.449 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.449 [ 00:10:35.450 { 00:10:35.450 "name": "BaseBdev1", 00:10:35.450 "aliases": [ 00:10:35.450 "21312fb1-3307-4b49-bf28-f6310e5680e3" 00:10:35.450 ], 00:10:35.450 "product_name": "Malloc disk", 00:10:35.450 "block_size": 512, 00:10:35.450 "num_blocks": 65536, 00:10:35.450 "uuid": "21312fb1-3307-4b49-bf28-f6310e5680e3", 00:10:35.450 "assigned_rate_limits": { 00:10:35.450 "rw_ios_per_sec": 0, 00:10:35.450 "rw_mbytes_per_sec": 0, 00:10:35.450 "r_mbytes_per_sec": 0, 00:10:35.450 "w_mbytes_per_sec": 0 00:10:35.450 }, 00:10:35.450 "claimed": true, 00:10:35.450 "claim_type": "exclusive_write", 00:10:35.450 "zoned": false, 00:10:35.450 "supported_io_types": { 00:10:35.450 "read": true, 00:10:35.450 "write": true, 00:10:35.450 "unmap": true, 00:10:35.450 "flush": true, 00:10:35.450 "reset": true, 00:10:35.450 "nvme_admin": false, 00:10:35.450 "nvme_io": false, 00:10:35.450 "nvme_io_md": false, 00:10:35.450 "write_zeroes": true, 00:10:35.450 "zcopy": true, 00:10:35.450 "get_zone_info": false, 00:10:35.450 "zone_management": false, 00:10:35.450 "zone_append": false, 00:10:35.450 "compare": false, 00:10:35.450 "compare_and_write": false, 00:10:35.450 "abort": true, 00:10:35.450 "seek_hole": false, 00:10:35.450 "seek_data": false, 00:10:35.450 "copy": true, 00:10:35.450 "nvme_iov_md": false 00:10:35.450 }, 00:10:35.450 "memory_domains": [ 00:10:35.450 { 00:10:35.450 "dma_device_id": "system", 00:10:35.450 "dma_device_type": 1 00:10:35.450 }, 00:10:35.450 { 00:10:35.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.450 "dma_device_type": 2 00:10:35.450 } 00:10:35.450 ], 00:10:35.450 "driver_specific": {} 00:10:35.450 } 00:10:35.450 ] 00:10:35.450 19:08:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:35.450 "name": "Existed_Raid", 00:10:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.450 "strip_size_kb": 0, 00:10:35.450 "state": "configuring", 00:10:35.450 "raid_level": "raid1", 00:10:35.450 "superblock": false, 00:10:35.450 "num_base_bdevs": 3, 00:10:35.450 "num_base_bdevs_discovered": 1, 00:10:35.450 "num_base_bdevs_operational": 3, 00:10:35.450 "base_bdevs_list": [ 00:10:35.450 { 00:10:35.450 "name": "BaseBdev1", 00:10:35.450 "uuid": "21312fb1-3307-4b49-bf28-f6310e5680e3", 00:10:35.450 "is_configured": true, 00:10:35.450 "data_offset": 0, 00:10:35.450 "data_size": 65536 00:10:35.450 }, 00:10:35.450 { 00:10:35.450 "name": "BaseBdev2", 00:10:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.450 "is_configured": false, 00:10:35.450 "data_offset": 0, 00:10:35.450 "data_size": 0 00:10:35.450 }, 00:10:35.450 { 00:10:35.450 "name": "BaseBdev3", 00:10:35.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.450 "is_configured": false, 00:10:35.450 "data_offset": 0, 00:10:35.450 "data_size": 0 00:10:35.450 } 00:10:35.450 ] 00:10:35.450 }' 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.450 19:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 [2024-11-27 19:08:45.371468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.046 [2024-11-27 19:08:45.371605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 [2024-11-27 19:08:45.383515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.046 [2024-11-27 19:08:45.385673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.046 [2024-11-27 19:08:45.385793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.046 [2024-11-27 19:08:45.385827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.046 [2024-11-27 19:08:45.385849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.046 "name": "Existed_Raid", 00:10:36.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.046 "strip_size_kb": 0, 00:10:36.046 "state": "configuring", 00:10:36.046 "raid_level": "raid1", 00:10:36.046 "superblock": false, 00:10:36.046 "num_base_bdevs": 3, 00:10:36.046 "num_base_bdevs_discovered": 1, 00:10:36.046 "num_base_bdevs_operational": 3, 00:10:36.046 "base_bdevs_list": [ 00:10:36.046 { 00:10:36.046 "name": "BaseBdev1", 00:10:36.046 "uuid": "21312fb1-3307-4b49-bf28-f6310e5680e3", 00:10:36.046 "is_configured": true, 00:10:36.046 "data_offset": 0, 00:10:36.046 "data_size": 65536 00:10:36.046 }, 00:10:36.046 { 00:10:36.046 "name": "BaseBdev2", 00:10:36.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.046 
"is_configured": false, 00:10:36.046 "data_offset": 0, 00:10:36.046 "data_size": 0 00:10:36.046 }, 00:10:36.046 { 00:10:36.046 "name": "BaseBdev3", 00:10:36.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.046 "is_configured": false, 00:10:36.046 "data_offset": 0, 00:10:36.046 "data_size": 0 00:10:36.046 } 00:10:36.046 ] 00:10:36.046 }' 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.046 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.309 [2024-11-27 19:08:45.877492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.309 BaseBdev2 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.309 19:08:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.309 [ 00:10:36.309 { 00:10:36.309 "name": "BaseBdev2", 00:10:36.309 "aliases": [ 00:10:36.309 "9693ffa6-c8c7-463a-b0ef-ab367672fb63" 00:10:36.309 ], 00:10:36.309 "product_name": "Malloc disk", 00:10:36.309 "block_size": 512, 00:10:36.309 "num_blocks": 65536, 00:10:36.309 "uuid": "9693ffa6-c8c7-463a-b0ef-ab367672fb63", 00:10:36.309 "assigned_rate_limits": { 00:10:36.309 "rw_ios_per_sec": 0, 00:10:36.309 "rw_mbytes_per_sec": 0, 00:10:36.309 "r_mbytes_per_sec": 0, 00:10:36.309 "w_mbytes_per_sec": 0 00:10:36.309 }, 00:10:36.309 "claimed": true, 00:10:36.309 "claim_type": "exclusive_write", 00:10:36.309 "zoned": false, 00:10:36.309 "supported_io_types": { 00:10:36.309 "read": true, 00:10:36.309 "write": true, 00:10:36.309 "unmap": true, 00:10:36.309 "flush": true, 00:10:36.309 "reset": true, 00:10:36.309 "nvme_admin": false, 00:10:36.309 "nvme_io": false, 00:10:36.309 "nvme_io_md": false, 00:10:36.309 "write_zeroes": true, 00:10:36.309 "zcopy": true, 00:10:36.309 "get_zone_info": false, 00:10:36.309 "zone_management": false, 00:10:36.309 "zone_append": false, 00:10:36.309 "compare": false, 00:10:36.309 "compare_and_write": false, 00:10:36.309 "abort": true, 00:10:36.309 "seek_hole": false, 00:10:36.309 "seek_data": false, 00:10:36.309 "copy": true, 00:10:36.309 "nvme_iov_md": false 00:10:36.309 }, 00:10:36.309 
"memory_domains": [ 00:10:36.309 { 00:10:36.309 "dma_device_id": "system", 00:10:36.309 "dma_device_type": 1 00:10:36.309 }, 00:10:36.309 { 00:10:36.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.309 "dma_device_type": 2 00:10:36.309 } 00:10:36.309 ], 00:10:36.309 "driver_specific": {} 00:10:36.309 } 00:10:36.309 ] 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.309 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.569 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.569 "name": "Existed_Raid", 00:10:36.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.569 "strip_size_kb": 0, 00:10:36.569 "state": "configuring", 00:10:36.569 "raid_level": "raid1", 00:10:36.569 "superblock": false, 00:10:36.569 "num_base_bdevs": 3, 00:10:36.569 "num_base_bdevs_discovered": 2, 00:10:36.569 "num_base_bdevs_operational": 3, 00:10:36.569 "base_bdevs_list": [ 00:10:36.569 { 00:10:36.569 "name": "BaseBdev1", 00:10:36.569 "uuid": "21312fb1-3307-4b49-bf28-f6310e5680e3", 00:10:36.569 "is_configured": true, 00:10:36.569 "data_offset": 0, 00:10:36.569 "data_size": 65536 00:10:36.569 }, 00:10:36.569 { 00:10:36.569 "name": "BaseBdev2", 00:10:36.569 "uuid": "9693ffa6-c8c7-463a-b0ef-ab367672fb63", 00:10:36.569 "is_configured": true, 00:10:36.569 "data_offset": 0, 00:10:36.569 "data_size": 65536 00:10:36.569 }, 00:10:36.569 { 00:10:36.569 "name": "BaseBdev3", 00:10:36.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.569 "is_configured": false, 00:10:36.569 "data_offset": 0, 00:10:36.569 "data_size": 0 00:10:36.569 } 00:10:36.569 ] 00:10:36.569 }' 00:10:36.569 19:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.569 19:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 [2024-11-27 19:08:46.401328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.829 [2024-11-27 19:08:46.401484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.829 [2024-11-27 19:08:46.401505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:36.829 [2024-11-27 19:08:46.401861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:36.829 [2024-11-27 19:08:46.402073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.829 [2024-11-27 19:08:46.402084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.829 [2024-11-27 19:08:46.402393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.829 BaseBdev3 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 [ 00:10:36.829 { 00:10:36.829 "name": "BaseBdev3", 00:10:36.829 "aliases": [ 00:10:36.829 "6f0a0e07-807d-4306-97e9-987946bd8201" 00:10:36.829 ], 00:10:36.829 "product_name": "Malloc disk", 00:10:36.829 "block_size": 512, 00:10:36.829 "num_blocks": 65536, 00:10:36.829 "uuid": "6f0a0e07-807d-4306-97e9-987946bd8201", 00:10:36.829 "assigned_rate_limits": { 00:10:36.829 "rw_ios_per_sec": 0, 00:10:36.829 "rw_mbytes_per_sec": 0, 00:10:36.829 "r_mbytes_per_sec": 0, 00:10:36.829 "w_mbytes_per_sec": 0 00:10:36.829 }, 00:10:36.829 "claimed": true, 00:10:36.829 "claim_type": "exclusive_write", 00:10:36.829 "zoned": false, 00:10:36.829 "supported_io_types": { 00:10:36.829 "read": true, 00:10:36.829 "write": true, 00:10:36.829 "unmap": true, 00:10:36.829 "flush": true, 00:10:36.829 "reset": true, 00:10:36.829 "nvme_admin": false, 00:10:36.829 "nvme_io": false, 00:10:36.829 "nvme_io_md": false, 00:10:36.829 "write_zeroes": true, 00:10:36.829 "zcopy": true, 00:10:36.829 "get_zone_info": false, 00:10:36.829 "zone_management": false, 00:10:36.829 "zone_append": false, 00:10:36.829 "compare": false, 00:10:36.829 "compare_and_write": false, 00:10:36.829 "abort": true, 00:10:36.829 "seek_hole": false, 00:10:36.829 "seek_data": false, 00:10:36.829 
"copy": true, 00:10:36.829 "nvme_iov_md": false 00:10:36.829 }, 00:10:36.829 "memory_domains": [ 00:10:36.829 { 00:10:36.829 "dma_device_id": "system", 00:10:36.829 "dma_device_type": 1 00:10:36.829 }, 00:10:36.829 { 00:10:36.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.829 "dma_device_type": 2 00:10:36.829 } 00:10:36.829 ], 00:10:36.829 "driver_specific": {} 00:10:36.829 } 00:10:36.829 ] 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.829 19:08:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.829 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.089 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.089 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.089 "name": "Existed_Raid", 00:10:37.089 "uuid": "e3bda160-c159-4669-b14c-18754a67ab36", 00:10:37.089 "strip_size_kb": 0, 00:10:37.089 "state": "online", 00:10:37.089 "raid_level": "raid1", 00:10:37.089 "superblock": false, 00:10:37.089 "num_base_bdevs": 3, 00:10:37.089 "num_base_bdevs_discovered": 3, 00:10:37.089 "num_base_bdevs_operational": 3, 00:10:37.089 "base_bdevs_list": [ 00:10:37.089 { 00:10:37.089 "name": "BaseBdev1", 00:10:37.089 "uuid": "21312fb1-3307-4b49-bf28-f6310e5680e3", 00:10:37.089 "is_configured": true, 00:10:37.089 "data_offset": 0, 00:10:37.089 "data_size": 65536 00:10:37.089 }, 00:10:37.089 { 00:10:37.089 "name": "BaseBdev2", 00:10:37.089 "uuid": "9693ffa6-c8c7-463a-b0ef-ab367672fb63", 00:10:37.089 "is_configured": true, 00:10:37.089 "data_offset": 0, 00:10:37.089 "data_size": 65536 00:10:37.089 }, 00:10:37.089 { 00:10:37.089 "name": "BaseBdev3", 00:10:37.089 "uuid": "6f0a0e07-807d-4306-97e9-987946bd8201", 00:10:37.089 "is_configured": true, 00:10:37.089 "data_offset": 0, 00:10:37.089 "data_size": 65536 00:10:37.089 } 00:10:37.089 ] 00:10:37.089 }' 00:10:37.089 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.089 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.349 19:08:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.349 [2024-11-27 19:08:46.876931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.349 "name": "Existed_Raid", 00:10:37.349 "aliases": [ 00:10:37.349 "e3bda160-c159-4669-b14c-18754a67ab36" 00:10:37.349 ], 00:10:37.349 "product_name": "Raid Volume", 00:10:37.349 "block_size": 512, 00:10:37.349 "num_blocks": 65536, 00:10:37.349 "uuid": "e3bda160-c159-4669-b14c-18754a67ab36", 00:10:37.349 "assigned_rate_limits": { 00:10:37.349 "rw_ios_per_sec": 0, 00:10:37.349 "rw_mbytes_per_sec": 0, 00:10:37.349 "r_mbytes_per_sec": 0, 00:10:37.349 "w_mbytes_per_sec": 0 00:10:37.349 }, 00:10:37.349 "claimed": false, 00:10:37.349 "zoned": false, 
00:10:37.349 "supported_io_types": { 00:10:37.349 "read": true, 00:10:37.349 "write": true, 00:10:37.349 "unmap": false, 00:10:37.349 "flush": false, 00:10:37.349 "reset": true, 00:10:37.349 "nvme_admin": false, 00:10:37.349 "nvme_io": false, 00:10:37.349 "nvme_io_md": false, 00:10:37.349 "write_zeroes": true, 00:10:37.349 "zcopy": false, 00:10:37.349 "get_zone_info": false, 00:10:37.349 "zone_management": false, 00:10:37.349 "zone_append": false, 00:10:37.349 "compare": false, 00:10:37.349 "compare_and_write": false, 00:10:37.349 "abort": false, 00:10:37.349 "seek_hole": false, 00:10:37.349 "seek_data": false, 00:10:37.349 "copy": false, 00:10:37.349 "nvme_iov_md": false 00:10:37.349 }, 00:10:37.349 "memory_domains": [ 00:10:37.349 { 00:10:37.349 "dma_device_id": "system", 00:10:37.349 "dma_device_type": 1 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.349 "dma_device_type": 2 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "dma_device_id": "system", 00:10:37.349 "dma_device_type": 1 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.349 "dma_device_type": 2 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "dma_device_id": "system", 00:10:37.349 "dma_device_type": 1 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.349 "dma_device_type": 2 00:10:37.349 } 00:10:37.349 ], 00:10:37.349 "driver_specific": { 00:10:37.349 "raid": { 00:10:37.349 "uuid": "e3bda160-c159-4669-b14c-18754a67ab36", 00:10:37.349 "strip_size_kb": 0, 00:10:37.349 "state": "online", 00:10:37.349 "raid_level": "raid1", 00:10:37.349 "superblock": false, 00:10:37.349 "num_base_bdevs": 3, 00:10:37.349 "num_base_bdevs_discovered": 3, 00:10:37.349 "num_base_bdevs_operational": 3, 00:10:37.349 "base_bdevs_list": [ 00:10:37.349 { 00:10:37.349 "name": "BaseBdev1", 00:10:37.349 "uuid": "21312fb1-3307-4b49-bf28-f6310e5680e3", 00:10:37.349 "is_configured": true, 00:10:37.349 
"data_offset": 0, 00:10:37.349 "data_size": 65536 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "name": "BaseBdev2", 00:10:37.349 "uuid": "9693ffa6-c8c7-463a-b0ef-ab367672fb63", 00:10:37.349 "is_configured": true, 00:10:37.349 "data_offset": 0, 00:10:37.349 "data_size": 65536 00:10:37.349 }, 00:10:37.349 { 00:10:37.349 "name": "BaseBdev3", 00:10:37.349 "uuid": "6f0a0e07-807d-4306-97e9-987946bd8201", 00:10:37.349 "is_configured": true, 00:10:37.349 "data_offset": 0, 00:10:37.349 "data_size": 65536 00:10:37.349 } 00:10:37.349 ] 00:10:37.349 } 00:10:37.349 } 00:10:37.349 }' 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.349 BaseBdev2 00:10:37.349 BaseBdev3' 00:10:37.349 19:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.610 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.610 [2024-11-27 19:08:47.148149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.869 "name": "Existed_Raid", 00:10:37.869 "uuid": "e3bda160-c159-4669-b14c-18754a67ab36", 00:10:37.869 "strip_size_kb": 0, 00:10:37.869 "state": "online", 00:10:37.869 "raid_level": "raid1", 00:10:37.869 "superblock": false, 00:10:37.869 "num_base_bdevs": 3, 00:10:37.869 "num_base_bdevs_discovered": 2, 00:10:37.869 "num_base_bdevs_operational": 2, 00:10:37.869 "base_bdevs_list": [ 00:10:37.869 { 00:10:37.869 "name": null, 00:10:37.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.869 "is_configured": false, 00:10:37.869 "data_offset": 0, 00:10:37.869 "data_size": 65536 00:10:37.869 }, 00:10:37.869 { 00:10:37.869 "name": "BaseBdev2", 00:10:37.869 "uuid": "9693ffa6-c8c7-463a-b0ef-ab367672fb63", 00:10:37.869 "is_configured": true, 00:10:37.869 "data_offset": 0, 00:10:37.869 "data_size": 65536 00:10:37.869 }, 00:10:37.869 { 00:10:37.869 "name": "BaseBdev3", 00:10:37.869 "uuid": "6f0a0e07-807d-4306-97e9-987946bd8201", 00:10:37.869 "is_configured": true, 00:10:37.869 "data_offset": 0, 00:10:37.869 "data_size": 65536 00:10:37.869 } 00:10:37.869 ] 
00:10:37.869 }' 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.869 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.129 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 [2024-11-27 19:08:47.737721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.388 19:08:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.388 [2024-11-27 19:08:47.885973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.388 [2024-11-27 19:08:47.886134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.388 [2024-11-27 19:08:47.989449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.388 [2024-11-27 19:08:47.989590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.388 [2024-11-27 19:08:47.989635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.388 19:08:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.388 19:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.388 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.647 BaseBdev2 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.647 
19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.647 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.647 [
00:10:38.647 {
00:10:38.647 "name": "BaseBdev2",
00:10:38.647 "aliases": [
00:10:38.647 "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db"
00:10:38.647 ],
00:10:38.647 "product_name": "Malloc disk",
00:10:38.647 "block_size": 512,
00:10:38.647 "num_blocks": 65536,
00:10:38.647 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:38.647 "assigned_rate_limits": {
00:10:38.647 "rw_ios_per_sec": 0,
00:10:38.647 "rw_mbytes_per_sec": 0,
00:10:38.647 "r_mbytes_per_sec": 0,
00:10:38.647 "w_mbytes_per_sec": 0
00:10:38.647 },
00:10:38.647 "claimed": false,
00:10:38.647 "zoned": false,
00:10:38.647 "supported_io_types": {
00:10:38.647 "read": true,
00:10:38.647 "write": true,
00:10:38.647 "unmap": true,
00:10:38.647 "flush": true,
00:10:38.647 "reset": true,
00:10:38.647 "nvme_admin": false,
00:10:38.647 "nvme_io": false,
00:10:38.647 "nvme_io_md": false,
00:10:38.647 "write_zeroes": true,
00:10:38.647 "zcopy": true,
00:10:38.647 "get_zone_info": false,
00:10:38.647 "zone_management": false,
00:10:38.647 "zone_append": false,
00:10:38.647 "compare": false,
00:10:38.647 "compare_and_write": false,
00:10:38.647 "abort": true,
00:10:38.647 "seek_hole": false,
00:10:38.647 "seek_data": false,
00:10:38.647 "copy": true,
00:10:38.647 "nvme_iov_md": false
00:10:38.647 },
00:10:38.647 "memory_domains": [
00:10:38.647 {
00:10:38.647 "dma_device_id": "system",
00:10:38.647 "dma_device_type": 1
00:10:38.647 },
00:10:38.647 {
00:10:38.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.647 "dma_device_type": 2
00:10:38.647 }
00:10:38.647 ],
00:10:38.648 "driver_specific": {}
00:10:38.648 }
00:10:38.648 ]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.648 BaseBdev3
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.648 [
00:10:38.648 {
00:10:38.648 "name": "BaseBdev3",
00:10:38.648 "aliases": [
00:10:38.648 "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1"
00:10:38.648 ],
00:10:38.648 "product_name": "Malloc disk",
00:10:38.648 "block_size": 512,
00:10:38.648 "num_blocks": 65536,
00:10:38.648 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:38.648 "assigned_rate_limits": {
00:10:38.648 "rw_ios_per_sec": 0,
00:10:38.648 "rw_mbytes_per_sec": 0,
00:10:38.648 "r_mbytes_per_sec": 0,
00:10:38.648 "w_mbytes_per_sec": 0
00:10:38.648 },
00:10:38.648 "claimed": false,
00:10:38.648 "zoned": false,
00:10:38.648 "supported_io_types": {
00:10:38.648 "read": true,
00:10:38.648 "write": true,
00:10:38.648 "unmap": true,
00:10:38.648 "flush": true,
00:10:38.648 "reset": true,
00:10:38.648 "nvme_admin": false,
00:10:38.648 "nvme_io": false,
00:10:38.648 "nvme_io_md": false,
00:10:38.648 "write_zeroes": true,
00:10:38.648 "zcopy": true,
00:10:38.648 "get_zone_info": false,
00:10:38.648 "zone_management": false,
00:10:38.648 "zone_append": false,
00:10:38.648 "compare": false,
00:10:38.648 "compare_and_write": false,
00:10:38.648 "abort": true,
00:10:38.648 "seek_hole": false,
00:10:38.648 "seek_data": false,
00:10:38.648 "copy": true,
00:10:38.648 "nvme_iov_md": false
00:10:38.648 },
00:10:38.648 "memory_domains": [
00:10:38.648 {
00:10:38.648 "dma_device_id": "system",
00:10:38.648 "dma_device_type": 1
00:10:38.648 },
00:10:38.648 {
00:10:38.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.648 "dma_device_type": 2
00:10:38.648 }
00:10:38.648 ],
00:10:38.648 "driver_specific": {}
00:10:38.648 }
00:10:38.648 ]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.648 [2024-11-27 19:08:48.219266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:38.648 [2024-11-27 19:08:48.219367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:38.648 [2024-11-27 19:08:48.219414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:38.648 [2024-11-27 19:08:48.221544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:38.648 "name": "Existed_Raid",
00:10:38.648 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.648 "strip_size_kb": 0,
00:10:38.648 "state": "configuring",
00:10:38.648 "raid_level": "raid1",
00:10:38.648 "superblock": false,
00:10:38.648 "num_base_bdevs": 3,
00:10:38.648 "num_base_bdevs_discovered": 2,
00:10:38.648 "num_base_bdevs_operational": 3,
00:10:38.648 "base_bdevs_list": [
00:10:38.648 {
00:10:38.648 "name": "BaseBdev1",
00:10:38.648 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.648 "is_configured": false,
00:10:38.648 "data_offset": 0,
00:10:38.648 "data_size": 0
00:10:38.648 },
00:10:38.648 {
00:10:38.648 "name": "BaseBdev2",
00:10:38.648 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:38.648 "is_configured": true,
00:10:38.648 "data_offset": 0,
00:10:38.648 "data_size": 65536
00:10:38.648 },
00:10:38.648 {
00:10:38.648 "name": "BaseBdev3",
00:10:38.648 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:38.648 "is_configured": true,
00:10:38.648 "data_offset": 0,
00:10:38.648 "data_size": 65536
00:10:38.648 }
00:10:38.648 ]
00:10:38.648 }'
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:38.648 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.217 [2024-11-27 19:08:48.698499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:39.217 "name": "Existed_Raid",
00:10:39.217 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:39.217 "strip_size_kb": 0,
00:10:39.217 "state": "configuring",
00:10:39.217 "raid_level": "raid1",
00:10:39.217 "superblock": false,
00:10:39.217 "num_base_bdevs": 3,
00:10:39.217 "num_base_bdevs_discovered": 1,
00:10:39.217 "num_base_bdevs_operational": 3,
00:10:39.217 "base_bdevs_list": [
00:10:39.217 {
00:10:39.217 "name": "BaseBdev1",
00:10:39.217 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:39.217 "is_configured": false,
00:10:39.217 "data_offset": 0,
00:10:39.217 "data_size": 0
00:10:39.217 },
00:10:39.217 {
00:10:39.217 "name": null,
00:10:39.217 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:39.217 "is_configured": false,
00:10:39.217 "data_offset": 0,
00:10:39.217 "data_size": 65536
00:10:39.217 },
00:10:39.217 {
00:10:39.217 "name": "BaseBdev3",
00:10:39.217 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:39.217 "is_configured": true,
00:10:39.217 "data_offset": 0,
00:10:39.217 "data_size": 65536
00:10:39.217 }
00:10:39.217 ]
00:10:39.217 }'
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:39.217 19:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.783 [2024-11-27 19:08:49.232619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:39.783 BaseBdev1
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.783 [
00:10:39.783 {
00:10:39.783 "name": "BaseBdev1",
00:10:39.783 "aliases": [
00:10:39.783 "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a"
00:10:39.783 ],
00:10:39.783 "product_name": "Malloc disk",
00:10:39.783 "block_size": 512,
00:10:39.783 "num_blocks": 65536,
00:10:39.783 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a",
00:10:39.783 "assigned_rate_limits": {
00:10:39.783 "rw_ios_per_sec": 0,
00:10:39.783 "rw_mbytes_per_sec": 0,
00:10:39.783 "r_mbytes_per_sec": 0,
00:10:39.783 "w_mbytes_per_sec": 0
00:10:39.783 },
00:10:39.783 "claimed": true,
00:10:39.783 "claim_type": "exclusive_write",
00:10:39.783 "zoned": false,
00:10:39.783 "supported_io_types": {
00:10:39.783 "read": true,
00:10:39.783 "write": true,
00:10:39.783 "unmap": true,
00:10:39.783 "flush": true,
00:10:39.783 "reset": true,
00:10:39.783 "nvme_admin": false,
00:10:39.783 "nvme_io": false,
00:10:39.783 "nvme_io_md": false,
00:10:39.783 "write_zeroes": true,
00:10:39.783 "zcopy": true,
00:10:39.783 "get_zone_info": false,
00:10:39.783 "zone_management": false,
00:10:39.783 "zone_append": false,
00:10:39.783 "compare": false,
00:10:39.783 "compare_and_write": false,
00:10:39.783 "abort": true,
00:10:39.783 "seek_hole": false,
00:10:39.783 "seek_data": false,
00:10:39.783 "copy": true,
00:10:39.783 "nvme_iov_md": false
00:10:39.783 },
00:10:39.783 "memory_domains": [
00:10:39.783 {
00:10:39.783 "dma_device_id": "system",
00:10:39.783 "dma_device_type": 1
00:10:39.783 },
00:10:39.783 {
00:10:39.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:39.783 "dma_device_type": 2
00:10:39.783 }
00:10:39.783 ],
00:10:39.783 "driver_specific": {}
00:10:39.783 }
00:10:39.783 ]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:39.783 "name": "Existed_Raid",
00:10:39.783 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:39.783 "strip_size_kb": 0,
00:10:39.783 "state": "configuring",
00:10:39.783 "raid_level": "raid1",
00:10:39.783 "superblock": false,
00:10:39.783 "num_base_bdevs": 3,
00:10:39.783 "num_base_bdevs_discovered": 2,
00:10:39.783 "num_base_bdevs_operational": 3,
00:10:39.783 "base_bdevs_list": [
00:10:39.783 {
00:10:39.783 "name": "BaseBdev1",
00:10:39.783 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a",
00:10:39.783 "is_configured": true,
00:10:39.783 "data_offset": 0,
00:10:39.783 "data_size": 65536
00:10:39.783 },
00:10:39.783 {
00:10:39.783 "name": null,
00:10:39.783 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:39.783 "is_configured": false,
00:10:39.783 "data_offset": 0,
00:10:39.783 "data_size": 65536
00:10:39.783 },
00:10:39.783 {
00:10:39.783 "name": "BaseBdev3",
00:10:39.783 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:39.783 "is_configured": true,
00:10:39.783 "data_offset": 0,
00:10:39.783 "data_size": 65536
00:10:39.783 }
00:10:39.783 ]
00:10:39.783 }'
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:39.783 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.350 [2024-11-27 19:08:49.743766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.350 "name": "Existed_Raid",
00:10:40.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:40.350 "strip_size_kb": 0,
00:10:40.350 "state": "configuring",
00:10:40.350 "raid_level": "raid1",
00:10:40.350 "superblock": false,
00:10:40.350 "num_base_bdevs": 3,
00:10:40.350 "num_base_bdevs_discovered": 1,
00:10:40.350 "num_base_bdevs_operational": 3,
00:10:40.350 "base_bdevs_list": [
00:10:40.350 {
00:10:40.350 "name": "BaseBdev1",
00:10:40.350 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a",
00:10:40.350 "is_configured": true,
00:10:40.350 "data_offset": 0,
00:10:40.350 "data_size": 65536
00:10:40.350 },
00:10:40.350 {
00:10:40.350 "name": null,
00:10:40.350 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:40.350 "is_configured": false,
00:10:40.350 "data_offset": 0,
00:10:40.350 "data_size": 65536
00:10:40.350 },
00:10:40.350 {
00:10:40.350 "name": null,
00:10:40.350 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:40.350 "is_configured": false,
00:10:40.350 "data_offset": 0,
00:10:40.350 "data_size": 65536
00:10:40.350 }
00:10:40.350 ]
00:10:40.350 }'
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.350 19:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.609 [2024-11-27 19:08:50.227005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:40.609 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:40.869 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:40.869 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:40.869 "name": "Existed_Raid",
00:10:40.869 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:40.869 "strip_size_kb": 0,
00:10:40.869 "state": "configuring",
00:10:40.869 "raid_level": "raid1",
00:10:40.869 "superblock": false,
00:10:40.869 "num_base_bdevs": 3,
00:10:40.869 "num_base_bdevs_discovered": 2,
00:10:40.869 "num_base_bdevs_operational": 3,
00:10:40.869 "base_bdevs_list": [
00:10:40.869 {
00:10:40.869 "name": "BaseBdev1",
00:10:40.869 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a",
00:10:40.869 "is_configured": true,
00:10:40.869 "data_offset": 0,
00:10:40.869 "data_size": 65536
00:10:40.869 },
00:10:40.869 {
00:10:40.869 "name": null,
00:10:40.869 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:40.869 "is_configured": false,
00:10:40.869 "data_offset": 0,
00:10:40.869 "data_size": 65536
00:10:40.869 },
00:10:40.869 {
00:10:40.869 "name": "BaseBdev3",
00:10:40.869 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:40.869 "is_configured": true,
00:10:40.869 "data_offset": 0,
00:10:40.869 "data_size": 65536
00:10:40.869 }
00:10:40.869 ]
00:10:40.869 }'
00:10:40.869 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:40.869 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.129 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.129 [2024-11-27 19:08:50.678263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.389 "name": "Existed_Raid",
00:10:41.389 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:41.389 "strip_size_kb": 0,
00:10:41.389 "state": "configuring",
00:10:41.389 "raid_level": "raid1",
00:10:41.389 "superblock": false,
00:10:41.389 "num_base_bdevs": 3,
00:10:41.389 "num_base_bdevs_discovered": 1,
00:10:41.389 "num_base_bdevs_operational": 3,
00:10:41.389 "base_bdevs_list": [
00:10:41.389 {
00:10:41.389 "name": null,
00:10:41.389 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a",
00:10:41.389 "is_configured": false,
00:10:41.389 "data_offset": 0,
00:10:41.389 "data_size": 65536
00:10:41.389 },
00:10:41.389 {
00:10:41.389 "name": null,
00:10:41.389 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:41.389 "is_configured": false,
00:10:41.389 "data_offset": 0,
00:10:41.389 "data_size": 65536
00:10:41.389 },
00:10:41.389 {
00:10:41.389 "name": "BaseBdev3",
00:10:41.389 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:41.389 "is_configured": true,
00:10:41.389 "data_offset": 0,
00:10:41.389 "data_size": 65536
00:10:41.389 }
00:10:41.389 ]
00:10:41.389 }'
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.389 19:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.649 [2024-11-27 19:08:51.231225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:41.649 "name": "Existed_Raid",
00:10:41.649 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:41.649 "strip_size_kb": 0,
00:10:41.649 "state": "configuring",
00:10:41.649 "raid_level": "raid1",
00:10:41.649 "superblock": false,
00:10:41.649 "num_base_bdevs": 3,
00:10:41.649 "num_base_bdevs_discovered": 2,
00:10:41.649 "num_base_bdevs_operational": 3,
00:10:41.649 "base_bdevs_list": [
00:10:41.649 {
00:10:41.649 "name": null,
00:10:41.649 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a",
00:10:41.649 "is_configured": false,
00:10:41.649 "data_offset": 0,
00:10:41.649 "data_size": 65536
00:10:41.649 },
00:10:41.649 {
00:10:41.649 "name": "BaseBdev2",
00:10:41.649 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db",
00:10:41.649 "is_configured": true,
00:10:41.649 "data_offset": 0,
00:10:41.649 "data_size": 65536
00:10:41.649 },
00:10:41.649 {
00:10:41.649 "name": "BaseBdev3",
00:10:41.649 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1",
00:10:41.649 "is_configured": true,
00:10:41.649 "data_offset": 0,
00:10:41.649 "data_size": 65536
00:10:41.649 }
00:10:41.649 ]
00:10:41.649 }'
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:41.649 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0fa964c2-ab31-4fce-a122-6e5b9ae64b7a
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:42.219 19:08:51 bdev_raid.raid_state_function_test --
common/autotest_common.sh@10 -- # set +x 00:10:42.219 [2024-11-27 19:08:51.810512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:42.219 [2024-11-27 19:08:51.810654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:42.219 [2024-11-27 19:08:51.810681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:42.219 NewBaseBdev 00:10:42.219 [2024-11-27 19:08:51.811066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:42.219 [2024-11-27 19:08:51.811269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:42.219 [2024-11-27 19:08:51.811283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:42.219 [2024-11-27 19:08:51.811575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.219 
19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.219 [ 00:10:42.219 { 00:10:42.219 "name": "NewBaseBdev", 00:10:42.219 "aliases": [ 00:10:42.219 "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a" 00:10:42.219 ], 00:10:42.219 "product_name": "Malloc disk", 00:10:42.219 "block_size": 512, 00:10:42.219 "num_blocks": 65536, 00:10:42.219 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a", 00:10:42.219 "assigned_rate_limits": { 00:10:42.219 "rw_ios_per_sec": 0, 00:10:42.219 "rw_mbytes_per_sec": 0, 00:10:42.219 "r_mbytes_per_sec": 0, 00:10:42.219 "w_mbytes_per_sec": 0 00:10:42.219 }, 00:10:42.219 "claimed": true, 00:10:42.219 "claim_type": "exclusive_write", 00:10:42.219 "zoned": false, 00:10:42.219 "supported_io_types": { 00:10:42.219 "read": true, 00:10:42.219 "write": true, 00:10:42.219 "unmap": true, 00:10:42.219 "flush": true, 00:10:42.219 "reset": true, 00:10:42.219 "nvme_admin": false, 00:10:42.219 "nvme_io": false, 00:10:42.219 "nvme_io_md": false, 00:10:42.219 "write_zeroes": true, 00:10:42.219 "zcopy": true, 00:10:42.219 "get_zone_info": false, 00:10:42.219 "zone_management": false, 00:10:42.219 "zone_append": false, 00:10:42.219 "compare": false, 00:10:42.219 "compare_and_write": false, 00:10:42.219 "abort": true, 00:10:42.219 "seek_hole": false, 00:10:42.219 "seek_data": false, 00:10:42.219 "copy": true, 00:10:42.219 "nvme_iov_md": false 00:10:42.219 }, 00:10:42.219 "memory_domains": [ 00:10:42.219 { 00:10:42.219 "dma_device_id": "system", 00:10:42.219 "dma_device_type": 1 
00:10:42.219 }, 00:10:42.219 { 00:10:42.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.219 "dma_device_type": 2 00:10:42.219 } 00:10:42.219 ], 00:10:42.219 "driver_specific": {} 00:10:42.219 } 00:10:42.219 ] 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.219 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.479 "name": "Existed_Raid", 00:10:42.479 "uuid": "0913f609-a0ab-422b-bebe-680ea63e2dfc", 00:10:42.479 "strip_size_kb": 0, 00:10:42.479 "state": "online", 00:10:42.479 "raid_level": "raid1", 00:10:42.479 "superblock": false, 00:10:42.479 "num_base_bdevs": 3, 00:10:42.479 "num_base_bdevs_discovered": 3, 00:10:42.479 "num_base_bdevs_operational": 3, 00:10:42.479 "base_bdevs_list": [ 00:10:42.479 { 00:10:42.479 "name": "NewBaseBdev", 00:10:42.479 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a", 00:10:42.479 "is_configured": true, 00:10:42.479 "data_offset": 0, 00:10:42.479 "data_size": 65536 00:10:42.479 }, 00:10:42.479 { 00:10:42.479 "name": "BaseBdev2", 00:10:42.479 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db", 00:10:42.479 "is_configured": true, 00:10:42.479 "data_offset": 0, 00:10:42.479 "data_size": 65536 00:10:42.479 }, 00:10:42.479 { 00:10:42.479 "name": "BaseBdev3", 00:10:42.479 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1", 00:10:42.479 "is_configured": true, 00:10:42.479 "data_offset": 0, 00:10:42.479 "data_size": 65536 00:10:42.479 } 00:10:42.479 ] 00:10:42.479 }' 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.479 19:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.739 [2024-11-27 19:08:52.306099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.739 "name": "Existed_Raid", 00:10:42.739 "aliases": [ 00:10:42.739 "0913f609-a0ab-422b-bebe-680ea63e2dfc" 00:10:42.739 ], 00:10:42.739 "product_name": "Raid Volume", 00:10:42.739 "block_size": 512, 00:10:42.739 "num_blocks": 65536, 00:10:42.739 "uuid": "0913f609-a0ab-422b-bebe-680ea63e2dfc", 00:10:42.739 "assigned_rate_limits": { 00:10:42.739 "rw_ios_per_sec": 0, 00:10:42.739 "rw_mbytes_per_sec": 0, 00:10:42.739 "r_mbytes_per_sec": 0, 00:10:42.739 "w_mbytes_per_sec": 0 00:10:42.739 }, 00:10:42.739 "claimed": false, 00:10:42.739 "zoned": false, 00:10:42.739 "supported_io_types": { 00:10:42.739 "read": true, 00:10:42.739 "write": true, 00:10:42.739 "unmap": false, 00:10:42.739 "flush": false, 00:10:42.739 "reset": true, 00:10:42.739 "nvme_admin": false, 00:10:42.739 "nvme_io": false, 00:10:42.739 "nvme_io_md": false, 00:10:42.739 "write_zeroes": true, 00:10:42.739 "zcopy": false, 00:10:42.739 "get_zone_info": false, 00:10:42.739 "zone_management": false, 00:10:42.739 
"zone_append": false, 00:10:42.739 "compare": false, 00:10:42.739 "compare_and_write": false, 00:10:42.739 "abort": false, 00:10:42.739 "seek_hole": false, 00:10:42.739 "seek_data": false, 00:10:42.739 "copy": false, 00:10:42.739 "nvme_iov_md": false 00:10:42.739 }, 00:10:42.739 "memory_domains": [ 00:10:42.739 { 00:10:42.739 "dma_device_id": "system", 00:10:42.739 "dma_device_type": 1 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.739 "dma_device_type": 2 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "dma_device_id": "system", 00:10:42.739 "dma_device_type": 1 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.739 "dma_device_type": 2 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "dma_device_id": "system", 00:10:42.739 "dma_device_type": 1 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.739 "dma_device_type": 2 00:10:42.739 } 00:10:42.739 ], 00:10:42.739 "driver_specific": { 00:10:42.739 "raid": { 00:10:42.739 "uuid": "0913f609-a0ab-422b-bebe-680ea63e2dfc", 00:10:42.739 "strip_size_kb": 0, 00:10:42.739 "state": "online", 00:10:42.739 "raid_level": "raid1", 00:10:42.739 "superblock": false, 00:10:42.739 "num_base_bdevs": 3, 00:10:42.739 "num_base_bdevs_discovered": 3, 00:10:42.739 "num_base_bdevs_operational": 3, 00:10:42.739 "base_bdevs_list": [ 00:10:42.739 { 00:10:42.739 "name": "NewBaseBdev", 00:10:42.739 "uuid": "0fa964c2-ab31-4fce-a122-6e5b9ae64b7a", 00:10:42.739 "is_configured": true, 00:10:42.739 "data_offset": 0, 00:10:42.739 "data_size": 65536 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "name": "BaseBdev2", 00:10:42.739 "uuid": "07ac0b27-b8f0-4bb3-99b1-f8deb9f655db", 00:10:42.739 "is_configured": true, 00:10:42.739 "data_offset": 0, 00:10:42.739 "data_size": 65536 00:10:42.739 }, 00:10:42.739 { 00:10:42.739 "name": "BaseBdev3", 00:10:42.739 "uuid": "e43e5d5b-9695-4fa1-bf1c-a13bda18bab1", 00:10:42.739 "is_configured": true, 
00:10:42.739 "data_offset": 0, 00:10:42.739 "data_size": 65536 00:10:42.739 } 00:10:42.739 ] 00:10:42.739 } 00:10:42.739 } 00:10:42.739 }' 00:10:42.739 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.999 BaseBdev2 00:10:42.999 BaseBdev3' 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.999 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.000 [2024-11-27 19:08:52.581279] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:10:43.000 [2024-11-27 19:08:52.581359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.000 [2024-11-27 19:08:52.581476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.000 [2024-11-27 19:08:52.581829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.000 [2024-11-27 19:08:52.581886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67494 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67494 ']' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67494 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67494 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67494' 00:10:43.000 killing process with pid 67494 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67494 00:10:43.000 [2024-11-27 19:08:52.631763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:43.000 19:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67494 00:10:43.570 [2024-11-27 19:08:52.969574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.018 00:10:45.018 real 0m10.753s 00:10:45.018 user 0m16.679s 00:10:45.018 sys 0m2.040s 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.018 ************************************ 00:10:45.018 END TEST raid_state_function_test 00:10:45.018 ************************************ 00:10:45.018 19:08:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:45.018 19:08:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.018 19:08:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.018 19:08:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.018 ************************************ 00:10:45.018 START TEST raid_state_function_test_sb 00:10:45.018 ************************************ 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68121 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:45.018 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68121' 00:10:45.018 Process raid pid: 68121 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68121 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68121 ']' 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.019 19:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.019 [2024-11-27 19:08:54.404139] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:45.019 [2024-11-27 19:08:54.404414] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.019 [2024-11-27 19:08:54.583995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.279 [2024-11-27 19:08:54.730184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.538 [2024-11-27 19:08:54.980410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.538 [2024-11-27 19:08:54.980522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.798 [2024-11-27 19:08:55.273257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.798 [2024-11-27 19:08:55.273365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.798 [2024-11-27 19:08:55.273402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.798 [2024-11-27 19:08:55.273427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.798 [2024-11-27 19:08:55.273445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:45.798 [2024-11-27 19:08:55.273482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.798 "name": "Existed_Raid", 00:10:45.798 "uuid": "d22aba06-d13c-4bc9-931c-0e2a7ba7e3ee", 00:10:45.798 "strip_size_kb": 0, 00:10:45.798 "state": "configuring", 00:10:45.798 "raid_level": "raid1", 00:10:45.798 "superblock": true, 00:10:45.798 "num_base_bdevs": 3, 00:10:45.798 "num_base_bdevs_discovered": 0, 00:10:45.798 "num_base_bdevs_operational": 3, 00:10:45.798 "base_bdevs_list": [ 00:10:45.798 { 00:10:45.798 "name": "BaseBdev1", 00:10:45.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.798 "is_configured": false, 00:10:45.798 "data_offset": 0, 00:10:45.798 "data_size": 0 00:10:45.798 }, 00:10:45.798 { 00:10:45.798 "name": "BaseBdev2", 00:10:45.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.798 "is_configured": false, 00:10:45.798 "data_offset": 0, 00:10:45.798 "data_size": 0 00:10:45.798 }, 00:10:45.798 { 00:10:45.798 "name": "BaseBdev3", 00:10:45.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.798 "is_configured": false, 00:10:45.798 "data_offset": 0, 00:10:45.798 "data_size": 0 00:10:45.798 } 00:10:45.798 ] 00:10:45.798 }' 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.798 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.368 [2024-11-27 19:08:55.724481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.368 [2024-11-27 19:08:55.724574] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.368 [2024-11-27 19:08:55.736451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.368 [2024-11-27 19:08:55.736542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.368 [2024-11-27 19:08:55.736572] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.368 [2024-11-27 19:08:55.736597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.368 [2024-11-27 19:08:55.736616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.368 [2024-11-27 19:08:55.736629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.368 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.369 [2024-11-27 19:08:55.793526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.369 BaseBdev1 
00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.369 [ 00:10:46.369 { 00:10:46.369 "name": "BaseBdev1", 00:10:46.369 "aliases": [ 00:10:46.369 "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee" 00:10:46.369 ], 00:10:46.369 "product_name": "Malloc disk", 00:10:46.369 "block_size": 512, 00:10:46.369 "num_blocks": 65536, 00:10:46.369 "uuid": "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee", 00:10:46.369 "assigned_rate_limits": { 00:10:46.369 
"rw_ios_per_sec": 0, 00:10:46.369 "rw_mbytes_per_sec": 0, 00:10:46.369 "r_mbytes_per_sec": 0, 00:10:46.369 "w_mbytes_per_sec": 0 00:10:46.369 }, 00:10:46.369 "claimed": true, 00:10:46.369 "claim_type": "exclusive_write", 00:10:46.369 "zoned": false, 00:10:46.369 "supported_io_types": { 00:10:46.369 "read": true, 00:10:46.369 "write": true, 00:10:46.369 "unmap": true, 00:10:46.369 "flush": true, 00:10:46.369 "reset": true, 00:10:46.369 "nvme_admin": false, 00:10:46.369 "nvme_io": false, 00:10:46.369 "nvme_io_md": false, 00:10:46.369 "write_zeroes": true, 00:10:46.369 "zcopy": true, 00:10:46.369 "get_zone_info": false, 00:10:46.369 "zone_management": false, 00:10:46.369 "zone_append": false, 00:10:46.369 "compare": false, 00:10:46.369 "compare_and_write": false, 00:10:46.369 "abort": true, 00:10:46.369 "seek_hole": false, 00:10:46.369 "seek_data": false, 00:10:46.369 "copy": true, 00:10:46.369 "nvme_iov_md": false 00:10:46.369 }, 00:10:46.369 "memory_domains": [ 00:10:46.369 { 00:10:46.369 "dma_device_id": "system", 00:10:46.369 "dma_device_type": 1 00:10:46.369 }, 00:10:46.369 { 00:10:46.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.369 "dma_device_type": 2 00:10:46.369 } 00:10:46.369 ], 00:10:46.369 "driver_specific": {} 00:10:46.369 } 00:10:46.369 ] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.369 "name": "Existed_Raid", 00:10:46.369 "uuid": "823ef108-f2d4-498b-9253-fcff06b2a1d1", 00:10:46.369 "strip_size_kb": 0, 00:10:46.369 "state": "configuring", 00:10:46.369 "raid_level": "raid1", 00:10:46.369 "superblock": true, 00:10:46.369 "num_base_bdevs": 3, 00:10:46.369 "num_base_bdevs_discovered": 1, 00:10:46.369 "num_base_bdevs_operational": 3, 00:10:46.369 "base_bdevs_list": [ 00:10:46.369 { 00:10:46.369 "name": "BaseBdev1", 00:10:46.369 "uuid": "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee", 00:10:46.369 "is_configured": true, 00:10:46.369 "data_offset": 2048, 00:10:46.369 "data_size": 63488 
00:10:46.369 }, 00:10:46.369 { 00:10:46.369 "name": "BaseBdev2", 00:10:46.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.369 "is_configured": false, 00:10:46.369 "data_offset": 0, 00:10:46.369 "data_size": 0 00:10:46.369 }, 00:10:46.369 { 00:10:46.369 "name": "BaseBdev3", 00:10:46.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.369 "is_configured": false, 00:10:46.369 "data_offset": 0, 00:10:46.369 "data_size": 0 00:10:46.369 } 00:10:46.369 ] 00:10:46.369 }' 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.369 19:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.629 [2024-11-27 19:08:56.252798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.629 [2024-11-27 19:08:56.252911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.629 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.889 [2024-11-27 19:08:56.264841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.889 [2024-11-27 19:08:56.267300] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.889 [2024-11-27 19:08:56.267392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.889 [2024-11-27 19:08:56.267430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.889 [2024-11-27 19:08:56.267456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.889 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.890 "name": "Existed_Raid", 00:10:46.890 "uuid": "80916428-7d5f-4db7-b7e5-5340de87731a", 00:10:46.890 "strip_size_kb": 0, 00:10:46.890 "state": "configuring", 00:10:46.890 "raid_level": "raid1", 00:10:46.890 "superblock": true, 00:10:46.890 "num_base_bdevs": 3, 00:10:46.890 "num_base_bdevs_discovered": 1, 00:10:46.890 "num_base_bdevs_operational": 3, 00:10:46.890 "base_bdevs_list": [ 00:10:46.890 { 00:10:46.890 "name": "BaseBdev1", 00:10:46.890 "uuid": "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee", 00:10:46.890 "is_configured": true, 00:10:46.890 "data_offset": 2048, 00:10:46.890 "data_size": 63488 00:10:46.890 }, 00:10:46.890 { 00:10:46.890 "name": "BaseBdev2", 00:10:46.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.890 "is_configured": false, 00:10:46.890 "data_offset": 0, 00:10:46.890 "data_size": 0 00:10:46.890 }, 00:10:46.890 { 00:10:46.890 "name": "BaseBdev3", 00:10:46.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.890 "is_configured": false, 00:10:46.890 "data_offset": 0, 00:10:46.890 "data_size": 0 00:10:46.890 } 00:10:46.890 ] 00:10:46.890 }' 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.890 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.150 [2024-11-27 19:08:56.727739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.150 BaseBdev2 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.150 [ 00:10:47.150 { 00:10:47.150 "name": "BaseBdev2", 00:10:47.150 "aliases": [ 00:10:47.150 "f9389557-8e45-4ff0-85c7-43058ec70c6c" 00:10:47.150 ], 00:10:47.150 "product_name": "Malloc disk", 00:10:47.150 "block_size": 512, 00:10:47.150 "num_blocks": 65536, 00:10:47.150 "uuid": "f9389557-8e45-4ff0-85c7-43058ec70c6c", 00:10:47.150 "assigned_rate_limits": { 00:10:47.150 "rw_ios_per_sec": 0, 00:10:47.150 "rw_mbytes_per_sec": 0, 00:10:47.150 "r_mbytes_per_sec": 0, 00:10:47.150 "w_mbytes_per_sec": 0 00:10:47.150 }, 00:10:47.150 "claimed": true, 00:10:47.150 "claim_type": "exclusive_write", 00:10:47.150 "zoned": false, 00:10:47.150 "supported_io_types": { 00:10:47.150 "read": true, 00:10:47.150 "write": true, 00:10:47.150 "unmap": true, 00:10:47.150 "flush": true, 00:10:47.150 "reset": true, 00:10:47.150 "nvme_admin": false, 00:10:47.150 "nvme_io": false, 00:10:47.150 "nvme_io_md": false, 00:10:47.150 "write_zeroes": true, 00:10:47.150 "zcopy": true, 00:10:47.150 "get_zone_info": false, 00:10:47.150 "zone_management": false, 00:10:47.150 "zone_append": false, 00:10:47.150 "compare": false, 00:10:47.150 "compare_and_write": false, 00:10:47.150 "abort": true, 00:10:47.150 "seek_hole": false, 00:10:47.150 "seek_data": false, 00:10:47.150 "copy": true, 00:10:47.150 "nvme_iov_md": false 00:10:47.150 }, 00:10:47.150 "memory_domains": [ 00:10:47.150 { 00:10:47.150 "dma_device_id": "system", 00:10:47.150 "dma_device_type": 1 00:10:47.150 }, 00:10:47.150 { 00:10:47.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.150 "dma_device_type": 2 00:10:47.150 } 00:10:47.150 ], 00:10:47.150 "driver_specific": {} 00:10:47.150 } 00:10:47.150 ] 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.150 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.410 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.410 
19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.410 "name": "Existed_Raid", 00:10:47.410 "uuid": "80916428-7d5f-4db7-b7e5-5340de87731a", 00:10:47.410 "strip_size_kb": 0, 00:10:47.410 "state": "configuring", 00:10:47.410 "raid_level": "raid1", 00:10:47.410 "superblock": true, 00:10:47.410 "num_base_bdevs": 3, 00:10:47.410 "num_base_bdevs_discovered": 2, 00:10:47.410 "num_base_bdevs_operational": 3, 00:10:47.410 "base_bdevs_list": [ 00:10:47.410 { 00:10:47.410 "name": "BaseBdev1", 00:10:47.410 "uuid": "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee", 00:10:47.410 "is_configured": true, 00:10:47.410 "data_offset": 2048, 00:10:47.410 "data_size": 63488 00:10:47.410 }, 00:10:47.410 { 00:10:47.410 "name": "BaseBdev2", 00:10:47.410 "uuid": "f9389557-8e45-4ff0-85c7-43058ec70c6c", 00:10:47.410 "is_configured": true, 00:10:47.410 "data_offset": 2048, 00:10:47.410 "data_size": 63488 00:10:47.410 }, 00:10:47.410 { 00:10:47.410 "name": "BaseBdev3", 00:10:47.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.410 "is_configured": false, 00:10:47.410 "data_offset": 0, 00:10:47.410 "data_size": 0 00:10:47.410 } 00:10:47.410 ] 00:10:47.410 }' 00:10:47.410 19:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.410 19:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.669 [2024-11-27 19:08:57.285722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.669 [2024-11-27 19:08:57.286151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:47.669 [2024-11-27 19:08:57.286215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.669 [2024-11-27 19:08:57.286560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:47.669 [2024-11-27 19:08:57.286801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.669 [2024-11-27 19:08:57.286844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:47.669 BaseBdev3 00:10:47.669 [2024-11-27 19:08:57.287075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.669 19:08:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.669 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.928 [ 00:10:47.928 { 00:10:47.928 "name": "BaseBdev3", 00:10:47.928 "aliases": [ 00:10:47.928 "2a27c153-ffd9-4d08-9275-9d42d4f60e8c" 00:10:47.928 ], 00:10:47.928 "product_name": "Malloc disk", 00:10:47.928 "block_size": 512, 00:10:47.928 "num_blocks": 65536, 00:10:47.928 "uuid": "2a27c153-ffd9-4d08-9275-9d42d4f60e8c", 00:10:47.928 "assigned_rate_limits": { 00:10:47.928 "rw_ios_per_sec": 0, 00:10:47.928 "rw_mbytes_per_sec": 0, 00:10:47.928 "r_mbytes_per_sec": 0, 00:10:47.928 "w_mbytes_per_sec": 0 00:10:47.928 }, 00:10:47.928 "claimed": true, 00:10:47.928 "claim_type": "exclusive_write", 00:10:47.928 "zoned": false, 00:10:47.928 "supported_io_types": { 00:10:47.928 "read": true, 00:10:47.928 "write": true, 00:10:47.928 "unmap": true, 00:10:47.928 "flush": true, 00:10:47.928 "reset": true, 00:10:47.928 "nvme_admin": false, 00:10:47.928 "nvme_io": false, 00:10:47.928 "nvme_io_md": false, 00:10:47.928 "write_zeroes": true, 00:10:47.928 "zcopy": true, 00:10:47.928 "get_zone_info": false, 00:10:47.928 "zone_management": false, 00:10:47.928 "zone_append": false, 00:10:47.928 "compare": false, 00:10:47.928 "compare_and_write": false, 00:10:47.928 "abort": true, 00:10:47.928 "seek_hole": false, 00:10:47.928 "seek_data": false, 00:10:47.928 "copy": true, 00:10:47.928 "nvme_iov_md": false 00:10:47.928 }, 00:10:47.928 "memory_domains": [ 00:10:47.928 { 00:10:47.928 "dma_device_id": "system", 00:10:47.928 "dma_device_type": 1 00:10:47.928 }, 00:10:47.928 { 00:10:47.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.928 "dma_device_type": 2 00:10:47.928 } 00:10:47.928 ], 00:10:47.928 "driver_specific": {} 00:10:47.928 } 00:10:47.928 ] 
00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.928 
19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.928 "name": "Existed_Raid", 00:10:47.928 "uuid": "80916428-7d5f-4db7-b7e5-5340de87731a", 00:10:47.928 "strip_size_kb": 0, 00:10:47.928 "state": "online", 00:10:47.928 "raid_level": "raid1", 00:10:47.928 "superblock": true, 00:10:47.928 "num_base_bdevs": 3, 00:10:47.928 "num_base_bdevs_discovered": 3, 00:10:47.928 "num_base_bdevs_operational": 3, 00:10:47.928 "base_bdevs_list": [ 00:10:47.928 { 00:10:47.928 "name": "BaseBdev1", 00:10:47.928 "uuid": "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee", 00:10:47.928 "is_configured": true, 00:10:47.928 "data_offset": 2048, 00:10:47.928 "data_size": 63488 00:10:47.928 }, 00:10:47.928 { 00:10:47.928 "name": "BaseBdev2", 00:10:47.928 "uuid": "f9389557-8e45-4ff0-85c7-43058ec70c6c", 00:10:47.928 "is_configured": true, 00:10:47.928 "data_offset": 2048, 00:10:47.928 "data_size": 63488 00:10:47.928 }, 00:10:47.928 { 00:10:47.928 "name": "BaseBdev3", 00:10:47.928 "uuid": "2a27c153-ffd9-4d08-9275-9d42d4f60e8c", 00:10:47.928 "is_configured": true, 00:10:47.928 "data_offset": 2048, 00:10:47.928 "data_size": 63488 00:10:47.928 } 00:10:47.928 ] 00:10:47.928 }' 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.928 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.188 [2024-11-27 19:08:57.769281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.188 "name": "Existed_Raid", 00:10:48.188 "aliases": [ 00:10:48.188 "80916428-7d5f-4db7-b7e5-5340de87731a" 00:10:48.188 ], 00:10:48.188 "product_name": "Raid Volume", 00:10:48.188 "block_size": 512, 00:10:48.188 "num_blocks": 63488, 00:10:48.188 "uuid": "80916428-7d5f-4db7-b7e5-5340de87731a", 00:10:48.188 "assigned_rate_limits": { 00:10:48.188 "rw_ios_per_sec": 0, 00:10:48.188 "rw_mbytes_per_sec": 0, 00:10:48.188 "r_mbytes_per_sec": 0, 00:10:48.188 "w_mbytes_per_sec": 0 00:10:48.188 }, 00:10:48.188 "claimed": false, 00:10:48.188 "zoned": false, 00:10:48.188 "supported_io_types": { 00:10:48.188 "read": true, 00:10:48.188 "write": true, 00:10:48.188 "unmap": false, 00:10:48.188 "flush": false, 00:10:48.188 "reset": true, 00:10:48.188 "nvme_admin": false, 00:10:48.188 "nvme_io": false, 00:10:48.188 "nvme_io_md": false, 00:10:48.188 "write_zeroes": true, 
00:10:48.188 "zcopy": false, 00:10:48.188 "get_zone_info": false, 00:10:48.188 "zone_management": false, 00:10:48.188 "zone_append": false, 00:10:48.188 "compare": false, 00:10:48.188 "compare_and_write": false, 00:10:48.188 "abort": false, 00:10:48.188 "seek_hole": false, 00:10:48.188 "seek_data": false, 00:10:48.188 "copy": false, 00:10:48.188 "nvme_iov_md": false 00:10:48.188 }, 00:10:48.188 "memory_domains": [ 00:10:48.188 { 00:10:48.188 "dma_device_id": "system", 00:10:48.188 "dma_device_type": 1 00:10:48.188 }, 00:10:48.188 { 00:10:48.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.188 "dma_device_type": 2 00:10:48.188 }, 00:10:48.188 { 00:10:48.188 "dma_device_id": "system", 00:10:48.188 "dma_device_type": 1 00:10:48.188 }, 00:10:48.188 { 00:10:48.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.188 "dma_device_type": 2 00:10:48.188 }, 00:10:48.188 { 00:10:48.188 "dma_device_id": "system", 00:10:48.188 "dma_device_type": 1 00:10:48.188 }, 00:10:48.188 { 00:10:48.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.188 "dma_device_type": 2 00:10:48.188 } 00:10:48.188 ], 00:10:48.188 "driver_specific": { 00:10:48.188 "raid": { 00:10:48.188 "uuid": "80916428-7d5f-4db7-b7e5-5340de87731a", 00:10:48.188 "strip_size_kb": 0, 00:10:48.188 "state": "online", 00:10:48.188 "raid_level": "raid1", 00:10:48.188 "superblock": true, 00:10:48.188 "num_base_bdevs": 3, 00:10:48.188 "num_base_bdevs_discovered": 3, 00:10:48.188 "num_base_bdevs_operational": 3, 00:10:48.188 "base_bdevs_list": [ 00:10:48.188 { 00:10:48.188 "name": "BaseBdev1", 00:10:48.188 "uuid": "074eea1c-9e34-4e8f-bdfe-6a2d38da8fee", 00:10:48.188 "is_configured": true, 00:10:48.188 "data_offset": 2048, 00:10:48.188 "data_size": 63488 00:10:48.188 }, 00:10:48.188 { 00:10:48.188 "name": "BaseBdev2", 00:10:48.188 "uuid": "f9389557-8e45-4ff0-85c7-43058ec70c6c", 00:10:48.188 "is_configured": true, 00:10:48.188 "data_offset": 2048, 00:10:48.188 "data_size": 63488 00:10:48.188 }, 00:10:48.188 { 
00:10:48.188 "name": "BaseBdev3", 00:10:48.188 "uuid": "2a27c153-ffd9-4d08-9275-9d42d4f60e8c", 00:10:48.188 "is_configured": true, 00:10:48.188 "data_offset": 2048, 00:10:48.188 "data_size": 63488 00:10:48.188 } 00:10:48.188 ] 00:10:48.188 } 00:10:48.188 } 00:10:48.188 }' 00:10:48.188 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:48.451 BaseBdev2 00:10:48.451 BaseBdev3' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.451 19:08:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.451 19:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.451 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.451 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.451 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.451 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.451 19:08:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.451 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.451 [2024-11-27 19:08:58.052545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.712 
19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.712 "name": "Existed_Raid", 00:10:48.712 "uuid": "80916428-7d5f-4db7-b7e5-5340de87731a", 00:10:48.712 "strip_size_kb": 0, 00:10:48.712 "state": "online", 00:10:48.712 "raid_level": "raid1", 00:10:48.712 "superblock": true, 00:10:48.712 "num_base_bdevs": 3, 00:10:48.712 "num_base_bdevs_discovered": 2, 00:10:48.712 "num_base_bdevs_operational": 2, 00:10:48.712 "base_bdevs_list": [ 00:10:48.712 { 00:10:48.712 "name": null, 00:10:48.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.712 "is_configured": false, 00:10:48.712 "data_offset": 0, 00:10:48.712 "data_size": 63488 00:10:48.712 }, 00:10:48.712 { 00:10:48.712 "name": "BaseBdev2", 00:10:48.712 "uuid": "f9389557-8e45-4ff0-85c7-43058ec70c6c", 00:10:48.712 "is_configured": true, 00:10:48.712 "data_offset": 2048, 00:10:48.712 "data_size": 63488 00:10:48.712 }, 00:10:48.712 { 00:10:48.712 "name": "BaseBdev3", 00:10:48.712 "uuid": "2a27c153-ffd9-4d08-9275-9d42d4f60e8c", 00:10:48.712 "is_configured": true, 00:10:48.712 "data_offset": 2048, 00:10:48.712 "data_size": 63488 00:10:48.712 } 00:10:48.712 ] 00:10:48.712 }' 00:10:48.712 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.712 
19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.972 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.972 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.972 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.972 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.972 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.972 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.233 [2024-11-27 19:08:58.637745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.233 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.233 [2024-11-27 19:08:58.804215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.233 [2024-11-27 19:08:58.804391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.493 [2024-11-27 19:08:58.912371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.493 [2024-11-27 19:08:58.912584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.493 [2024-11-27 19:08:58.912630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.493 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.494 BaseBdev2 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.494 [ 00:10:49.494 { 00:10:49.494 "name": "BaseBdev2", 00:10:49.494 "aliases": [ 00:10:49.494 "f16a3544-63ee-4e80-871b-41cd0aa21421" 00:10:49.494 ], 00:10:49.494 "product_name": "Malloc disk", 00:10:49.494 "block_size": 512, 00:10:49.494 "num_blocks": 65536, 00:10:49.494 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:49.494 "assigned_rate_limits": { 00:10:49.494 "rw_ios_per_sec": 0, 00:10:49.494 "rw_mbytes_per_sec": 0, 00:10:49.494 "r_mbytes_per_sec": 0, 00:10:49.494 "w_mbytes_per_sec": 0 00:10:49.494 }, 00:10:49.494 "claimed": false, 00:10:49.494 "zoned": false, 00:10:49.494 "supported_io_types": { 00:10:49.494 "read": true, 00:10:49.494 "write": true, 00:10:49.494 "unmap": true, 00:10:49.494 "flush": true, 00:10:49.494 "reset": true, 00:10:49.494 "nvme_admin": false, 00:10:49.494 "nvme_io": false, 00:10:49.494 
"nvme_io_md": false, 00:10:49.494 "write_zeroes": true, 00:10:49.494 "zcopy": true, 00:10:49.494 "get_zone_info": false, 00:10:49.494 "zone_management": false, 00:10:49.494 "zone_append": false, 00:10:49.494 "compare": false, 00:10:49.494 "compare_and_write": false, 00:10:49.494 "abort": true, 00:10:49.494 "seek_hole": false, 00:10:49.494 "seek_data": false, 00:10:49.494 "copy": true, 00:10:49.494 "nvme_iov_md": false 00:10:49.494 }, 00:10:49.494 "memory_domains": [ 00:10:49.494 { 00:10:49.494 "dma_device_id": "system", 00:10:49.494 "dma_device_type": 1 00:10:49.494 }, 00:10:49.494 { 00:10:49.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.494 "dma_device_type": 2 00:10:49.494 } 00:10:49.494 ], 00:10:49.494 "driver_specific": {} 00:10:49.494 } 00:10:49.494 ] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.494 BaseBdev3 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.494 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.753 [ 00:10:49.753 { 00:10:49.753 "name": "BaseBdev3", 00:10:49.753 "aliases": [ 00:10:49.753 "56e616a7-76f2-46b4-9fa8-b421544a4ae0" 00:10:49.753 ], 00:10:49.753 "product_name": "Malloc disk", 00:10:49.753 "block_size": 512, 00:10:49.753 "num_blocks": 65536, 00:10:49.753 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:49.753 "assigned_rate_limits": { 00:10:49.753 "rw_ios_per_sec": 0, 00:10:49.753 "rw_mbytes_per_sec": 0, 00:10:49.753 "r_mbytes_per_sec": 0, 00:10:49.753 "w_mbytes_per_sec": 0 00:10:49.753 }, 00:10:49.753 "claimed": false, 00:10:49.753 "zoned": false, 00:10:49.753 "supported_io_types": { 00:10:49.753 "read": true, 00:10:49.753 "write": true, 00:10:49.753 "unmap": true, 00:10:49.753 "flush": true, 00:10:49.753 "reset": true, 00:10:49.753 "nvme_admin": false, 
00:10:49.753 "nvme_io": false, 00:10:49.753 "nvme_io_md": false, 00:10:49.753 "write_zeroes": true, 00:10:49.753 "zcopy": true, 00:10:49.753 "get_zone_info": false, 00:10:49.753 "zone_management": false, 00:10:49.753 "zone_append": false, 00:10:49.753 "compare": false, 00:10:49.753 "compare_and_write": false, 00:10:49.753 "abort": true, 00:10:49.753 "seek_hole": false, 00:10:49.753 "seek_data": false, 00:10:49.753 "copy": true, 00:10:49.754 "nvme_iov_md": false 00:10:49.754 }, 00:10:49.754 "memory_domains": [ 00:10:49.754 { 00:10:49.754 "dma_device_id": "system", 00:10:49.754 "dma_device_type": 1 00:10:49.754 }, 00:10:49.754 { 00:10:49.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.754 "dma_device_type": 2 00:10:49.754 } 00:10:49.754 ], 00:10:49.754 "driver_specific": {} 00:10:49.754 } 00:10:49.754 ] 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.754 [2024-11-27 19:08:59.157012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.754 [2024-11-27 19:08:59.157107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.754 [2024-11-27 19:08:59.157149] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.754 [2024-11-27 19:08:59.159392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.754 
19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.754 "name": "Existed_Raid", 00:10:49.754 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:49.754 "strip_size_kb": 0, 00:10:49.754 "state": "configuring", 00:10:49.754 "raid_level": "raid1", 00:10:49.754 "superblock": true, 00:10:49.754 "num_base_bdevs": 3, 00:10:49.754 "num_base_bdevs_discovered": 2, 00:10:49.754 "num_base_bdevs_operational": 3, 00:10:49.754 "base_bdevs_list": [ 00:10:49.754 { 00:10:49.754 "name": "BaseBdev1", 00:10:49.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.754 "is_configured": false, 00:10:49.754 "data_offset": 0, 00:10:49.754 "data_size": 0 00:10:49.754 }, 00:10:49.754 { 00:10:49.754 "name": "BaseBdev2", 00:10:49.754 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:49.754 "is_configured": true, 00:10:49.754 "data_offset": 2048, 00:10:49.754 "data_size": 63488 00:10:49.754 }, 00:10:49.754 { 00:10:49.754 "name": "BaseBdev3", 00:10:49.754 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:49.754 "is_configured": true, 00:10:49.754 "data_offset": 2048, 00:10:49.754 "data_size": 63488 00:10:49.754 } 00:10:49.754 ] 00:10:49.754 }' 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.754 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 [2024-11-27 19:08:59.556378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.015 19:08:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.015 "name": 
"Existed_Raid", 00:10:50.015 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:50.015 "strip_size_kb": 0, 00:10:50.015 "state": "configuring", 00:10:50.015 "raid_level": "raid1", 00:10:50.015 "superblock": true, 00:10:50.015 "num_base_bdevs": 3, 00:10:50.015 "num_base_bdevs_discovered": 1, 00:10:50.015 "num_base_bdevs_operational": 3, 00:10:50.015 "base_bdevs_list": [ 00:10:50.015 { 00:10:50.015 "name": "BaseBdev1", 00:10:50.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.015 "is_configured": false, 00:10:50.015 "data_offset": 0, 00:10:50.015 "data_size": 0 00:10:50.015 }, 00:10:50.015 { 00:10:50.015 "name": null, 00:10:50.015 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:50.015 "is_configured": false, 00:10:50.015 "data_offset": 0, 00:10:50.015 "data_size": 63488 00:10:50.015 }, 00:10:50.015 { 00:10:50.015 "name": "BaseBdev3", 00:10:50.015 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:50.015 "is_configured": true, 00:10:50.015 "data_offset": 2048, 00:10:50.015 "data_size": 63488 00:10:50.015 } 00:10:50.015 ] 00:10:50.015 }' 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.015 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.585 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.585 19:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.585 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.585 19:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.585 
19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.585 [2024-11-27 19:09:00.087841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.585 BaseBdev1 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:50.585 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.585 [ 00:10:50.585 { 00:10:50.585 "name": "BaseBdev1", 00:10:50.585 "aliases": [ 00:10:50.585 "61f3c748-9bb2-4163-a264-b401c00730af" 00:10:50.585 ], 00:10:50.585 "product_name": "Malloc disk", 00:10:50.585 "block_size": 512, 00:10:50.585 "num_blocks": 65536, 00:10:50.585 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:50.585 "assigned_rate_limits": { 00:10:50.585 "rw_ios_per_sec": 0, 00:10:50.585 "rw_mbytes_per_sec": 0, 00:10:50.585 "r_mbytes_per_sec": 0, 00:10:50.585 "w_mbytes_per_sec": 0 00:10:50.585 }, 00:10:50.585 "claimed": true, 00:10:50.585 "claim_type": "exclusive_write", 00:10:50.585 "zoned": false, 00:10:50.585 "supported_io_types": { 00:10:50.585 "read": true, 00:10:50.585 "write": true, 00:10:50.585 "unmap": true, 00:10:50.585 "flush": true, 00:10:50.585 "reset": true, 00:10:50.585 "nvme_admin": false, 00:10:50.585 "nvme_io": false, 00:10:50.585 "nvme_io_md": false, 00:10:50.585 "write_zeroes": true, 00:10:50.585 "zcopy": true, 00:10:50.585 "get_zone_info": false, 00:10:50.585 "zone_management": false, 00:10:50.585 "zone_append": false, 00:10:50.585 "compare": false, 00:10:50.586 "compare_and_write": false, 00:10:50.586 "abort": true, 00:10:50.586 "seek_hole": false, 00:10:50.586 "seek_data": false, 00:10:50.586 "copy": true, 00:10:50.586 "nvme_iov_md": false 00:10:50.586 }, 00:10:50.586 "memory_domains": [ 00:10:50.586 { 00:10:50.586 "dma_device_id": "system", 00:10:50.586 "dma_device_type": 1 00:10:50.586 }, 00:10:50.586 { 00:10:50.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.586 "dma_device_type": 2 00:10:50.586 } 00:10:50.586 ], 00:10:50.586 "driver_specific": {} 00:10:50.586 } 00:10:50.586 ] 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.586 
19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.586 "name": "Existed_Raid", 00:10:50.586 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:50.586 "strip_size_kb": 0, 
00:10:50.586 "state": "configuring", 00:10:50.586 "raid_level": "raid1", 00:10:50.586 "superblock": true, 00:10:50.586 "num_base_bdevs": 3, 00:10:50.586 "num_base_bdevs_discovered": 2, 00:10:50.586 "num_base_bdevs_operational": 3, 00:10:50.586 "base_bdevs_list": [ 00:10:50.586 { 00:10:50.586 "name": "BaseBdev1", 00:10:50.586 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:50.586 "is_configured": true, 00:10:50.586 "data_offset": 2048, 00:10:50.586 "data_size": 63488 00:10:50.586 }, 00:10:50.586 { 00:10:50.586 "name": null, 00:10:50.586 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:50.586 "is_configured": false, 00:10:50.586 "data_offset": 0, 00:10:50.586 "data_size": 63488 00:10:50.586 }, 00:10:50.586 { 00:10:50.586 "name": "BaseBdev3", 00:10:50.586 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:50.586 "is_configured": true, 00:10:50.586 "data_offset": 2048, 00:10:50.586 "data_size": 63488 00:10:50.586 } 00:10:50.586 ] 00:10:50.586 }' 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.586 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.155 [2024-11-27 19:09:00.602993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.155 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.155 "name": "Existed_Raid", 00:10:51.156 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:51.156 "strip_size_kb": 0, 00:10:51.156 "state": "configuring", 00:10:51.156 "raid_level": "raid1", 00:10:51.156 "superblock": true, 00:10:51.156 "num_base_bdevs": 3, 00:10:51.156 "num_base_bdevs_discovered": 1, 00:10:51.156 "num_base_bdevs_operational": 3, 00:10:51.156 "base_bdevs_list": [ 00:10:51.156 { 00:10:51.156 "name": "BaseBdev1", 00:10:51.156 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:51.156 "is_configured": true, 00:10:51.156 "data_offset": 2048, 00:10:51.156 "data_size": 63488 00:10:51.156 }, 00:10:51.156 { 00:10:51.156 "name": null, 00:10:51.156 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:51.156 "is_configured": false, 00:10:51.156 "data_offset": 0, 00:10:51.156 "data_size": 63488 00:10:51.156 }, 00:10:51.156 { 00:10:51.156 "name": null, 00:10:51.156 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:51.156 "is_configured": false, 00:10:51.156 "data_offset": 0, 00:10:51.156 "data_size": 63488 00:10:51.156 } 00:10:51.156 ] 00:10:51.156 }' 00:10:51.156 19:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.156 19:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.414 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.414 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.414 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.414 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.673 [2024-11-27 19:09:01.058248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.673 "name": "Existed_Raid", 00:10:51.673 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:51.673 "strip_size_kb": 0, 00:10:51.673 "state": "configuring", 00:10:51.673 "raid_level": "raid1", 00:10:51.673 "superblock": true, 00:10:51.673 "num_base_bdevs": 3, 00:10:51.673 "num_base_bdevs_discovered": 2, 00:10:51.673 "num_base_bdevs_operational": 3, 00:10:51.673 "base_bdevs_list": [ 00:10:51.673 { 00:10:51.673 "name": "BaseBdev1", 00:10:51.673 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:51.673 "is_configured": true, 00:10:51.673 "data_offset": 2048, 00:10:51.673 "data_size": 63488 00:10:51.673 }, 00:10:51.673 { 00:10:51.673 "name": null, 00:10:51.673 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:51.673 "is_configured": false, 00:10:51.673 "data_offset": 0, 00:10:51.673 "data_size": 63488 00:10:51.673 }, 00:10:51.673 { 00:10:51.673 "name": "BaseBdev3", 00:10:51.673 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:51.673 "is_configured": true, 00:10:51.673 "data_offset": 2048, 00:10:51.673 "data_size": 63488 00:10:51.673 } 00:10:51.673 ] 00:10:51.673 }' 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.673 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.932 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.932 [2024-11-27 19:09:01.565411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.191 "name": "Existed_Raid", 00:10:52.191 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:52.191 "strip_size_kb": 0, 00:10:52.191 "state": "configuring", 00:10:52.191 "raid_level": "raid1", 00:10:52.191 "superblock": true, 00:10:52.191 "num_base_bdevs": 3, 00:10:52.191 "num_base_bdevs_discovered": 1, 00:10:52.191 "num_base_bdevs_operational": 3, 00:10:52.191 "base_bdevs_list": [ 00:10:52.191 { 00:10:52.191 "name": null, 00:10:52.191 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:52.191 "is_configured": false, 00:10:52.191 "data_offset": 0, 00:10:52.191 "data_size": 63488 00:10:52.191 }, 00:10:52.191 { 00:10:52.191 "name": null, 00:10:52.191 "uuid": 
"f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:52.191 "is_configured": false, 00:10:52.191 "data_offset": 0, 00:10:52.191 "data_size": 63488 00:10:52.191 }, 00:10:52.191 { 00:10:52.191 "name": "BaseBdev3", 00:10:52.191 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:52.191 "is_configured": true, 00:10:52.191 "data_offset": 2048, 00:10:52.191 "data_size": 63488 00:10:52.191 } 00:10:52.191 ] 00:10:52.191 }' 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.191 19:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.759 [2024-11-27 19:09:02.132577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.759 "name": "Existed_Raid", 00:10:52.759 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:52.759 "strip_size_kb": 0, 00:10:52.759 "state": "configuring", 00:10:52.759 
"raid_level": "raid1", 00:10:52.759 "superblock": true, 00:10:52.759 "num_base_bdevs": 3, 00:10:52.759 "num_base_bdevs_discovered": 2, 00:10:52.759 "num_base_bdevs_operational": 3, 00:10:52.759 "base_bdevs_list": [ 00:10:52.759 { 00:10:52.759 "name": null, 00:10:52.759 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:52.759 "is_configured": false, 00:10:52.759 "data_offset": 0, 00:10:52.759 "data_size": 63488 00:10:52.759 }, 00:10:52.759 { 00:10:52.759 "name": "BaseBdev2", 00:10:52.759 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:52.759 "is_configured": true, 00:10:52.759 "data_offset": 2048, 00:10:52.759 "data_size": 63488 00:10:52.759 }, 00:10:52.759 { 00:10:52.759 "name": "BaseBdev3", 00:10:52.759 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:52.759 "is_configured": true, 00:10:52.759 "data_offset": 2048, 00:10:52.759 "data_size": 63488 00:10:52.759 } 00:10:52.759 ] 00:10:52.759 }' 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.759 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.018 19:09:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.018 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 61f3c748-9bb2-4163-a264-b401c00730af 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.276 [2024-11-27 19:09:02.730858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:53.276 [2024-11-27 19:09:02.731209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.276 [2024-11-27 19:09:02.731261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:53.276 NewBaseBdev 00:10:53.276 [2024-11-27 19:09:02.731560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:53.276 [2024-11-27 19:09:02.731737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.276 [2024-11-27 19:09:02.731789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:53.276 [2024-11-27 19:09:02.731975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:53.276 
19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.276 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.276 [ 00:10:53.276 { 00:10:53.276 "name": "NewBaseBdev", 00:10:53.276 "aliases": [ 00:10:53.276 "61f3c748-9bb2-4163-a264-b401c00730af" 00:10:53.276 ], 00:10:53.276 "product_name": "Malloc disk", 00:10:53.276 "block_size": 512, 00:10:53.276 "num_blocks": 65536, 00:10:53.276 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:53.276 "assigned_rate_limits": { 00:10:53.276 "rw_ios_per_sec": 0, 00:10:53.276 "rw_mbytes_per_sec": 0, 00:10:53.276 "r_mbytes_per_sec": 0, 00:10:53.276 "w_mbytes_per_sec": 0 00:10:53.276 }, 00:10:53.276 "claimed": true, 00:10:53.276 "claim_type": "exclusive_write", 00:10:53.276 
"zoned": false, 00:10:53.276 "supported_io_types": { 00:10:53.277 "read": true, 00:10:53.277 "write": true, 00:10:53.277 "unmap": true, 00:10:53.277 "flush": true, 00:10:53.277 "reset": true, 00:10:53.277 "nvme_admin": false, 00:10:53.277 "nvme_io": false, 00:10:53.277 "nvme_io_md": false, 00:10:53.277 "write_zeroes": true, 00:10:53.277 "zcopy": true, 00:10:53.277 "get_zone_info": false, 00:10:53.277 "zone_management": false, 00:10:53.277 "zone_append": false, 00:10:53.277 "compare": false, 00:10:53.277 "compare_and_write": false, 00:10:53.277 "abort": true, 00:10:53.277 "seek_hole": false, 00:10:53.277 "seek_data": false, 00:10:53.277 "copy": true, 00:10:53.277 "nvme_iov_md": false 00:10:53.277 }, 00:10:53.277 "memory_domains": [ 00:10:53.277 { 00:10:53.277 "dma_device_id": "system", 00:10:53.277 "dma_device_type": 1 00:10:53.277 }, 00:10:53.277 { 00:10:53.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.277 "dma_device_type": 2 00:10:53.277 } 00:10:53.277 ], 00:10:53.277 "driver_specific": {} 00:10:53.277 } 00:10:53.277 ] 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.277 "name": "Existed_Raid", 00:10:53.277 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:53.277 "strip_size_kb": 0, 00:10:53.277 "state": "online", 00:10:53.277 "raid_level": "raid1", 00:10:53.277 "superblock": true, 00:10:53.277 "num_base_bdevs": 3, 00:10:53.277 "num_base_bdevs_discovered": 3, 00:10:53.277 "num_base_bdevs_operational": 3, 00:10:53.277 "base_bdevs_list": [ 00:10:53.277 { 00:10:53.277 "name": "NewBaseBdev", 00:10:53.277 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:53.277 "is_configured": true, 00:10:53.277 "data_offset": 2048, 00:10:53.277 "data_size": 63488 00:10:53.277 }, 00:10:53.277 { 00:10:53.277 "name": "BaseBdev2", 00:10:53.277 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:53.277 "is_configured": true, 00:10:53.277 "data_offset": 2048, 00:10:53.277 "data_size": 63488 00:10:53.277 }, 00:10:53.277 
{ 00:10:53.277 "name": "BaseBdev3", 00:10:53.277 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:53.277 "is_configured": true, 00:10:53.277 "data_offset": 2048, 00:10:53.277 "data_size": 63488 00:10:53.277 } 00:10:53.277 ] 00:10:53.277 }' 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.277 19:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.848 [2024-11-27 19:09:03.218435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.848 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.848 "name": "Existed_Raid", 00:10:53.848 
"aliases": [ 00:10:53.848 "e217e5a3-1e07-45e3-9994-da537ee347b2" 00:10:53.848 ], 00:10:53.848 "product_name": "Raid Volume", 00:10:53.848 "block_size": 512, 00:10:53.848 "num_blocks": 63488, 00:10:53.848 "uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:53.848 "assigned_rate_limits": { 00:10:53.848 "rw_ios_per_sec": 0, 00:10:53.848 "rw_mbytes_per_sec": 0, 00:10:53.848 "r_mbytes_per_sec": 0, 00:10:53.848 "w_mbytes_per_sec": 0 00:10:53.848 }, 00:10:53.848 "claimed": false, 00:10:53.848 "zoned": false, 00:10:53.848 "supported_io_types": { 00:10:53.848 "read": true, 00:10:53.848 "write": true, 00:10:53.848 "unmap": false, 00:10:53.848 "flush": false, 00:10:53.848 "reset": true, 00:10:53.848 "nvme_admin": false, 00:10:53.848 "nvme_io": false, 00:10:53.848 "nvme_io_md": false, 00:10:53.848 "write_zeroes": true, 00:10:53.849 "zcopy": false, 00:10:53.849 "get_zone_info": false, 00:10:53.849 "zone_management": false, 00:10:53.849 "zone_append": false, 00:10:53.849 "compare": false, 00:10:53.849 "compare_and_write": false, 00:10:53.849 "abort": false, 00:10:53.849 "seek_hole": false, 00:10:53.849 "seek_data": false, 00:10:53.849 "copy": false, 00:10:53.849 "nvme_iov_md": false 00:10:53.849 }, 00:10:53.849 "memory_domains": [ 00:10:53.849 { 00:10:53.849 "dma_device_id": "system", 00:10:53.849 "dma_device_type": 1 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.849 "dma_device_type": 2 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "system", 00:10:53.849 "dma_device_type": 1 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.849 "dma_device_type": 2 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "system", 00:10:53.849 "dma_device_type": 1 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.849 "dma_device_type": 2 00:10:53.849 } 00:10:53.849 ], 00:10:53.849 "driver_specific": { 00:10:53.849 "raid": { 00:10:53.849 
"uuid": "e217e5a3-1e07-45e3-9994-da537ee347b2", 00:10:53.849 "strip_size_kb": 0, 00:10:53.849 "state": "online", 00:10:53.849 "raid_level": "raid1", 00:10:53.849 "superblock": true, 00:10:53.849 "num_base_bdevs": 3, 00:10:53.849 "num_base_bdevs_discovered": 3, 00:10:53.849 "num_base_bdevs_operational": 3, 00:10:53.849 "base_bdevs_list": [ 00:10:53.849 { 00:10:53.849 "name": "NewBaseBdev", 00:10:53.849 "uuid": "61f3c748-9bb2-4163-a264-b401c00730af", 00:10:53.849 "is_configured": true, 00:10:53.849 "data_offset": 2048, 00:10:53.849 "data_size": 63488 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "name": "BaseBdev2", 00:10:53.849 "uuid": "f16a3544-63ee-4e80-871b-41cd0aa21421", 00:10:53.849 "is_configured": true, 00:10:53.849 "data_offset": 2048, 00:10:53.849 "data_size": 63488 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "name": "BaseBdev3", 00:10:53.849 "uuid": "56e616a7-76f2-46b4-9fa8-b421544a4ae0", 00:10:53.849 "is_configured": true, 00:10:53.849 "data_offset": 2048, 00:10:53.849 "data_size": 63488 00:10:53.849 } 00:10:53.849 ] 00:10:53.849 } 00:10:53.849 } 00:10:53.849 }' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.849 BaseBdev2 00:10:53.849 BaseBdev3' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.849 19:09:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.849 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.110 [2024-11-27 19:09:03.509622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.110 [2024-11-27 19:09:03.509719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.110 [2024-11-27 19:09:03.509825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.110 [2024-11-27 19:09:03.510146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.110 [2024-11-27 19:09:03.510157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68121 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68121 ']' 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68121 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68121 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68121' 00:10:54.110 killing process with pid 68121 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68121 00:10:54.110 [2024-11-27 19:09:03.556245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.110 19:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68121 00:10:54.370 [2024-11-27 19:09:03.881248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.752 19:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.752 00:10:55.752 real 0m10.809s 00:10:55.752 user 0m16.772s 00:10:55.752 sys 0m2.162s 00:10:55.753 19:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.753 ************************************ 00:10:55.753 END TEST raid_state_function_test_sb 00:10:55.753 ************************************ 00:10:55.753 19:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.753 19:09:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:55.753 19:09:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.753 19:09:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.753 19:09:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.753 ************************************ 00:10:55.753 START TEST raid_superblock_test 00:10:55.753 ************************************ 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68741 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68741 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68741 ']' 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.753 19:09:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.753 [2024-11-27 19:09:05.276818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:55.753 [2024-11-27 19:09:05.277090] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68741 ] 00:10:56.013 [2024-11-27 19:09:05.437633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.013 [2024-11-27 19:09:05.581590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.273 [2024-11-27 19:09:05.830913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.273 [2024-11-27 19:09:05.831119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:56.534 
19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.534 malloc1 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.534 [2024-11-27 19:09:06.163736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.534 [2024-11-27 19:09:06.163850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.534 [2024-11-27 19:09:06.163902] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:56.534 [2024-11-27 19:09:06.163936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.534 [2024-11-27 19:09:06.166454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.534 [2024-11-27 19:09:06.166528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.534 pt1 00:10:56.534 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.794 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.794 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.794 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.795 malloc2 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.795 [2024-11-27 19:09:06.230318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.795 [2024-11-27 19:09:06.230422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.795 [2024-11-27 19:09:06.230475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:56.795 [2024-11-27 19:09:06.230509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.795 [2024-11-27 19:09:06.232997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.795 [2024-11-27 19:09:06.233080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.795 
pt2 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.795 malloc3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.795 [2024-11-27 19:09:06.309570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.795 [2024-11-27 19:09:06.309672] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.795 [2024-11-27 19:09:06.309733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:56.795 [2024-11-27 19:09:06.309763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.795 [2024-11-27 19:09:06.312289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.795 [2024-11-27 19:09:06.312362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.795 pt3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.795 [2024-11-27 19:09:06.321601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.795 [2024-11-27 19:09:06.323762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.795 [2024-11-27 19:09:06.323878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.795 [2024-11-27 19:09:06.324085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:56.795 [2024-11-27 19:09:06.324141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.795 [2024-11-27 19:09:06.324424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:56.795 
[2024-11-27 19:09:06.324647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:56.795 [2024-11-27 19:09:06.324668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:56.795 [2024-11-27 19:09:06.324849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.795 "name": "raid_bdev1", 00:10:56.795 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:56.795 "strip_size_kb": 0, 00:10:56.795 "state": "online", 00:10:56.795 "raid_level": "raid1", 00:10:56.795 "superblock": true, 00:10:56.795 "num_base_bdevs": 3, 00:10:56.795 "num_base_bdevs_discovered": 3, 00:10:56.795 "num_base_bdevs_operational": 3, 00:10:56.795 "base_bdevs_list": [ 00:10:56.795 { 00:10:56.795 "name": "pt1", 00:10:56.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.795 "is_configured": true, 00:10:56.795 "data_offset": 2048, 00:10:56.795 "data_size": 63488 00:10:56.795 }, 00:10:56.795 { 00:10:56.795 "name": "pt2", 00:10:56.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.795 "is_configured": true, 00:10:56.795 "data_offset": 2048, 00:10:56.795 "data_size": 63488 00:10:56.795 }, 00:10:56.795 { 00:10:56.795 "name": "pt3", 00:10:56.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.795 "is_configured": true, 00:10:56.795 "data_offset": 2048, 00:10:56.795 "data_size": 63488 00:10:56.795 } 00:10:56.795 ] 00:10:56.795 }' 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.795 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.365 19:09:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.365 [2024-11-27 19:09:06.745210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.365 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.365 "name": "raid_bdev1", 00:10:57.365 "aliases": [ 00:10:57.365 "d0c2be5d-cfa0-46de-ab03-da128c2f056a" 00:10:57.365 ], 00:10:57.365 "product_name": "Raid Volume", 00:10:57.365 "block_size": 512, 00:10:57.365 "num_blocks": 63488, 00:10:57.365 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:57.365 "assigned_rate_limits": { 00:10:57.365 "rw_ios_per_sec": 0, 00:10:57.365 "rw_mbytes_per_sec": 0, 00:10:57.365 "r_mbytes_per_sec": 0, 00:10:57.365 "w_mbytes_per_sec": 0 00:10:57.365 }, 00:10:57.365 "claimed": false, 00:10:57.365 "zoned": false, 00:10:57.365 "supported_io_types": { 00:10:57.365 "read": true, 00:10:57.365 "write": true, 00:10:57.365 "unmap": false, 00:10:57.365 "flush": false, 00:10:57.365 "reset": true, 00:10:57.365 "nvme_admin": false, 00:10:57.365 "nvme_io": false, 00:10:57.365 "nvme_io_md": false, 00:10:57.365 "write_zeroes": true, 00:10:57.365 "zcopy": false, 00:10:57.365 "get_zone_info": false, 00:10:57.365 "zone_management": false, 00:10:57.365 "zone_append": false, 00:10:57.365 "compare": false, 00:10:57.365 
"compare_and_write": false, 00:10:57.365 "abort": false, 00:10:57.365 "seek_hole": false, 00:10:57.365 "seek_data": false, 00:10:57.365 "copy": false, 00:10:57.365 "nvme_iov_md": false 00:10:57.365 }, 00:10:57.365 "memory_domains": [ 00:10:57.365 { 00:10:57.365 "dma_device_id": "system", 00:10:57.365 "dma_device_type": 1 00:10:57.365 }, 00:10:57.365 { 00:10:57.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.365 "dma_device_type": 2 00:10:57.365 }, 00:10:57.365 { 00:10:57.365 "dma_device_id": "system", 00:10:57.365 "dma_device_type": 1 00:10:57.365 }, 00:10:57.365 { 00:10:57.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.365 "dma_device_type": 2 00:10:57.365 }, 00:10:57.365 { 00:10:57.365 "dma_device_id": "system", 00:10:57.365 "dma_device_type": 1 00:10:57.365 }, 00:10:57.365 { 00:10:57.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.366 "dma_device_type": 2 00:10:57.366 } 00:10:57.366 ], 00:10:57.366 "driver_specific": { 00:10:57.366 "raid": { 00:10:57.366 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:57.366 "strip_size_kb": 0, 00:10:57.366 "state": "online", 00:10:57.366 "raid_level": "raid1", 00:10:57.366 "superblock": true, 00:10:57.366 "num_base_bdevs": 3, 00:10:57.366 "num_base_bdevs_discovered": 3, 00:10:57.366 "num_base_bdevs_operational": 3, 00:10:57.366 "base_bdevs_list": [ 00:10:57.366 { 00:10:57.366 "name": "pt1", 00:10:57.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.366 "is_configured": true, 00:10:57.366 "data_offset": 2048, 00:10:57.366 "data_size": 63488 00:10:57.366 }, 00:10:57.366 { 00:10:57.366 "name": "pt2", 00:10:57.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.366 "is_configured": true, 00:10:57.366 "data_offset": 2048, 00:10:57.366 "data_size": 63488 00:10:57.366 }, 00:10:57.366 { 00:10:57.366 "name": "pt3", 00:10:57.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.366 "is_configured": true, 00:10:57.366 "data_offset": 2048, 00:10:57.366 "data_size": 63488 00:10:57.366 } 
00:10:57.366 ] 00:10:57.366 } 00:10:57.366 } 00:10:57.366 }' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.366 pt2 00:10:57.366 pt3' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.366 19:09:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.366 [2024-11-27 19:09:06.976746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.366 19:09:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d0c2be5d-cfa0-46de-ab03-da128c2f056a 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d0c2be5d-cfa0-46de-ab03-da128c2f056a ']' 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.626 [2024-11-27 19:09:07.020378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.626 [2024-11-27 19:09:07.020413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.626 [2024-11-27 19:09:07.020524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.626 [2024-11-27 19:09:07.020608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.626 [2024-11-27 19:09:07.020619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:57.626 
19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.626 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.627 [2024-11-27 19:09:07.156204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.627 [2024-11-27 19:09:07.158479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.627 [2024-11-27 19:09:07.158594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:57.627 [2024-11-27 19:09:07.158686] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:57.627 [2024-11-27 19:09:07.158805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:57.627 [2024-11-27 19:09:07.158877] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:57.627 [2024-11-27 19:09:07.158929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.627 [2024-11-27 19:09:07.158964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:57.627 request: 00:10:57.627 { 00:10:57.627 "name": "raid_bdev1", 00:10:57.627 "raid_level": "raid1", 00:10:57.627 "base_bdevs": [ 00:10:57.627 "malloc1", 00:10:57.627 "malloc2", 00:10:57.627 "malloc3" 00:10:57.627 ], 00:10:57.627 "superblock": false, 00:10:57.627 "method": "bdev_raid_create", 00:10:57.627 "req_id": 1 00:10:57.627 } 00:10:57.627 Got JSON-RPC error response 00:10:57.627 response: 00:10:57.627 { 00:10:57.627 "code": -17, 00:10:57.627 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.627 } 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:57.627 19:09:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.627 [2024-11-27 19:09:07.208035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.627 [2024-11-27 19:09:07.208127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.627 [2024-11-27 19:09:07.208172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.627 [2024-11-27 19:09:07.208202] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.627 [2024-11-27 19:09:07.210756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.627 [2024-11-27 19:09:07.210844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.627 [2024-11-27 19:09:07.210957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.627 [2024-11-27 19:09:07.211049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.627 pt1 00:10:57.627 19:09:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.627 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.887 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.887 "name": "raid_bdev1", 00:10:57.887 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:57.887 "strip_size_kb": 0, 00:10:57.887 "state": 
"configuring", 00:10:57.887 "raid_level": "raid1", 00:10:57.887 "superblock": true, 00:10:57.887 "num_base_bdevs": 3, 00:10:57.887 "num_base_bdevs_discovered": 1, 00:10:57.887 "num_base_bdevs_operational": 3, 00:10:57.887 "base_bdevs_list": [ 00:10:57.887 { 00:10:57.887 "name": "pt1", 00:10:57.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.887 "is_configured": true, 00:10:57.887 "data_offset": 2048, 00:10:57.887 "data_size": 63488 00:10:57.887 }, 00:10:57.887 { 00:10:57.887 "name": null, 00:10:57.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.887 "is_configured": false, 00:10:57.887 "data_offset": 2048, 00:10:57.887 "data_size": 63488 00:10:57.887 }, 00:10:57.887 { 00:10:57.887 "name": null, 00:10:57.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.887 "is_configured": false, 00:10:57.887 "data_offset": 2048, 00:10:57.887 "data_size": 63488 00:10:57.887 } 00:10:57.887 ] 00:10:57.887 }' 00:10:57.887 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.887 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.146 [2024-11-27 19:09:07.611440] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.146 [2024-11-27 19:09:07.611569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.146 [2024-11-27 19:09:07.611625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:58.146 
[2024-11-27 19:09:07.611656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.146 [2024-11-27 19:09:07.612243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.146 [2024-11-27 19:09:07.612305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.146 [2024-11-27 19:09:07.612426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.146 [2024-11-27 19:09:07.612455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.146 pt2 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.146 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.147 [2024-11-27 19:09:07.623406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.147 "name": "raid_bdev1", 00:10:58.147 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:58.147 "strip_size_kb": 0, 00:10:58.147 "state": "configuring", 00:10:58.147 "raid_level": "raid1", 00:10:58.147 "superblock": true, 00:10:58.147 "num_base_bdevs": 3, 00:10:58.147 "num_base_bdevs_discovered": 1, 00:10:58.147 "num_base_bdevs_operational": 3, 00:10:58.147 "base_bdevs_list": [ 00:10:58.147 { 00:10:58.147 "name": "pt1", 00:10:58.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.147 "is_configured": true, 00:10:58.147 "data_offset": 2048, 00:10:58.147 "data_size": 63488 00:10:58.147 }, 00:10:58.147 { 00:10:58.147 "name": null, 00:10:58.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.147 "is_configured": false, 00:10:58.147 "data_offset": 0, 00:10:58.147 "data_size": 63488 00:10:58.147 }, 00:10:58.147 { 00:10:58.147 "name": null, 00:10:58.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.147 "is_configured": false, 00:10:58.147 
"data_offset": 2048, 00:10:58.147 "data_size": 63488 00:10:58.147 } 00:10:58.147 ] 00:10:58.147 }' 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.147 19:09:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:58.715 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.715 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.715 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.715 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.715 [2024-11-27 19:09:08.058676] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.715 [2024-11-27 19:09:08.058847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.715 [2024-11-27 19:09:08.058899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:58.715 [2024-11-27 19:09:08.058933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.715 [2024-11-27 19:09:08.059537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.715 [2024-11-27 19:09:08.059602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.715 [2024-11-27 19:09:08.059750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.715 [2024-11-27 19:09:08.059827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.715 pt2 00:10:58.715 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.715 19:09:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.716 [2024-11-27 19:09:08.070609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.716 [2024-11-27 19:09:08.070724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.716 [2024-11-27 19:09:08.070765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.716 [2024-11-27 19:09:08.070803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.716 [2024-11-27 19:09:08.071323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.716 [2024-11-27 19:09:08.071392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.716 [2024-11-27 19:09:08.071490] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:58.716 [2024-11-27 19:09:08.071543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.716 [2024-11-27 19:09:08.071731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.716 [2024-11-27 19:09:08.071778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:58.716 [2024-11-27 19:09:08.072078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:58.716 [2024-11-27 19:09:08.072287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:58.716 [2024-11-27 19:09:08.072326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.716 [2024-11-27 19:09:08.072524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.716 pt3 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.716 19:09:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.716 "name": "raid_bdev1", 00:10:58.716 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:58.716 "strip_size_kb": 0, 00:10:58.716 "state": "online", 00:10:58.716 "raid_level": "raid1", 00:10:58.716 "superblock": true, 00:10:58.716 "num_base_bdevs": 3, 00:10:58.716 "num_base_bdevs_discovered": 3, 00:10:58.716 "num_base_bdevs_operational": 3, 00:10:58.716 "base_bdevs_list": [ 00:10:58.716 { 00:10:58.716 "name": "pt1", 00:10:58.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.716 "is_configured": true, 00:10:58.716 "data_offset": 2048, 00:10:58.716 "data_size": 63488 00:10:58.716 }, 00:10:58.716 { 00:10:58.716 "name": "pt2", 00:10:58.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.716 "is_configured": true, 00:10:58.716 "data_offset": 2048, 00:10:58.716 "data_size": 63488 00:10:58.716 }, 00:10:58.716 { 00:10:58.716 "name": "pt3", 00:10:58.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.716 "is_configured": true, 00:10:58.716 "data_offset": 2048, 00:10:58.716 "data_size": 63488 00:10:58.716 } 00:10:58.716 ] 00:10:58.716 }' 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.716 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.975 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.976 [2024-11-27 19:09:08.510256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.976 "name": "raid_bdev1", 00:10:58.976 "aliases": [ 00:10:58.976 "d0c2be5d-cfa0-46de-ab03-da128c2f056a" 00:10:58.976 ], 00:10:58.976 "product_name": "Raid Volume", 00:10:58.976 "block_size": 512, 00:10:58.976 "num_blocks": 63488, 00:10:58.976 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:58.976 "assigned_rate_limits": { 00:10:58.976 "rw_ios_per_sec": 0, 00:10:58.976 "rw_mbytes_per_sec": 0, 00:10:58.976 "r_mbytes_per_sec": 0, 00:10:58.976 "w_mbytes_per_sec": 0 00:10:58.976 }, 00:10:58.976 "claimed": false, 00:10:58.976 "zoned": false, 00:10:58.976 "supported_io_types": { 00:10:58.976 "read": true, 00:10:58.976 "write": true, 00:10:58.976 "unmap": false, 00:10:58.976 "flush": false, 00:10:58.976 "reset": true, 00:10:58.976 "nvme_admin": false, 00:10:58.976 "nvme_io": false, 00:10:58.976 "nvme_io_md": false, 00:10:58.976 "write_zeroes": true, 00:10:58.976 "zcopy": false, 00:10:58.976 "get_zone_info": 
false, 00:10:58.976 "zone_management": false, 00:10:58.976 "zone_append": false, 00:10:58.976 "compare": false, 00:10:58.976 "compare_and_write": false, 00:10:58.976 "abort": false, 00:10:58.976 "seek_hole": false, 00:10:58.976 "seek_data": false, 00:10:58.976 "copy": false, 00:10:58.976 "nvme_iov_md": false 00:10:58.976 }, 00:10:58.976 "memory_domains": [ 00:10:58.976 { 00:10:58.976 "dma_device_id": "system", 00:10:58.976 "dma_device_type": 1 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.976 "dma_device_type": 2 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "dma_device_id": "system", 00:10:58.976 "dma_device_type": 1 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.976 "dma_device_type": 2 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "dma_device_id": "system", 00:10:58.976 "dma_device_type": 1 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.976 "dma_device_type": 2 00:10:58.976 } 00:10:58.976 ], 00:10:58.976 "driver_specific": { 00:10:58.976 "raid": { 00:10:58.976 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:58.976 "strip_size_kb": 0, 00:10:58.976 "state": "online", 00:10:58.976 "raid_level": "raid1", 00:10:58.976 "superblock": true, 00:10:58.976 "num_base_bdevs": 3, 00:10:58.976 "num_base_bdevs_discovered": 3, 00:10:58.976 "num_base_bdevs_operational": 3, 00:10:58.976 "base_bdevs_list": [ 00:10:58.976 { 00:10:58.976 "name": "pt1", 00:10:58.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.976 "is_configured": true, 00:10:58.976 "data_offset": 2048, 00:10:58.976 "data_size": 63488 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "name": "pt2", 00:10:58.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.976 "is_configured": true, 00:10:58.976 "data_offset": 2048, 00:10:58.976 "data_size": 63488 00:10:58.976 }, 00:10:58.976 { 00:10:58.976 "name": "pt3", 00:10:58.976 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:58.976 "is_configured": true, 00:10:58.976 "data_offset": 2048, 00:10:58.976 "data_size": 63488 00:10:58.976 } 00:10:58.976 ] 00:10:58.976 } 00:10:58.976 } 00:10:58.976 }' 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.976 pt2 00:10:58.976 pt3' 00:10:58.976 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:59.236 [2024-11-27 19:09:08.797653] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d0c2be5d-cfa0-46de-ab03-da128c2f056a '!=' d0c2be5d-cfa0-46de-ab03-da128c2f056a ']' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.236 [2024-11-27 19:09:08.829381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.236 19:09:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.236 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.496 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.496 "name": "raid_bdev1", 00:10:59.496 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:10:59.496 "strip_size_kb": 0, 00:10:59.496 "state": "online", 00:10:59.496 "raid_level": "raid1", 00:10:59.496 "superblock": true, 00:10:59.496 "num_base_bdevs": 3, 00:10:59.496 "num_base_bdevs_discovered": 2, 00:10:59.496 "num_base_bdevs_operational": 2, 00:10:59.496 "base_bdevs_list": [ 00:10:59.496 { 00:10:59.496 "name": null, 00:10:59.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.496 "is_configured": false, 00:10:59.496 "data_offset": 0, 00:10:59.496 "data_size": 63488 00:10:59.496 }, 00:10:59.496 { 00:10:59.496 "name": "pt2", 00:10:59.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.496 "is_configured": true, 00:10:59.496 "data_offset": 2048, 00:10:59.496 "data_size": 63488 00:10:59.496 }, 00:10:59.496 { 00:10:59.496 "name": "pt3", 00:10:59.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.496 "is_configured": true, 00:10:59.496 "data_offset": 2048, 00:10:59.496 "data_size": 63488 00:10:59.496 } 
00:10:59.496 ] 00:10:59.496 }' 00:10:59.496 19:09:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.496 19:09:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.758 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.758 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.758 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.758 [2024-11-27 19:09:09.268646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.758 [2024-11-27 19:09:09.268740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.758 [2024-11-27 19:09:09.268881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.758 [2024-11-27 19:09:09.268955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.758 [2024-11-27 19:09:09.268973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:59.758 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.758 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.758 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:59.759 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.760 19:09:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.760 [2024-11-27 19:09:09.356434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.760 [2024-11-27 19:09:09.356501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.760 [2024-11-27 19:09:09.356519] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:59.760 [2024-11-27 19:09:09.356532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.760 [2024-11-27 19:09:09.359139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.760 [2024-11-27 19:09:09.359184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.760 [2024-11-27 19:09:09.359275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:59.760 [2024-11-27 19:09:09.359341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.760 pt2 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.760 19:09:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.760 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.761 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.761 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.761 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.761 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.761 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.761 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.024 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.024 "name": "raid_bdev1", 00:11:00.024 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:11:00.024 "strip_size_kb": 0, 00:11:00.024 "state": "configuring", 00:11:00.024 "raid_level": "raid1", 00:11:00.024 "superblock": true, 00:11:00.024 "num_base_bdevs": 3, 00:11:00.024 "num_base_bdevs_discovered": 1, 00:11:00.024 "num_base_bdevs_operational": 2, 00:11:00.024 "base_bdevs_list": [ 00:11:00.024 { 00:11:00.024 "name": null, 00:11:00.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.024 "is_configured": false, 00:11:00.024 "data_offset": 2048, 00:11:00.024 "data_size": 63488 00:11:00.024 }, 00:11:00.024 { 00:11:00.024 "name": "pt2", 00:11:00.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.024 "is_configured": true, 00:11:00.024 "data_offset": 2048, 00:11:00.024 "data_size": 63488 00:11:00.024 }, 00:11:00.024 { 00:11:00.024 "name": null, 00:11:00.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.024 "is_configured": false, 00:11:00.024 "data_offset": 2048, 00:11:00.024 "data_size": 63488 00:11:00.024 } 
00:11:00.024 ] 00:11:00.024 }' 00:11:00.024 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.024 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.284 [2024-11-27 19:09:09.807770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.284 [2024-11-27 19:09:09.807899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.284 [2024-11-27 19:09:09.807952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:00.284 [2024-11-27 19:09:09.807987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.284 [2024-11-27 19:09:09.808601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.284 [2024-11-27 19:09:09.808673] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.284 [2024-11-27 19:09:09.808841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.284 [2024-11-27 19:09:09.808913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.284 [2024-11-27 19:09:09.809093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:00.284 [2024-11-27 19:09:09.809135] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:00.284 [2024-11-27 19:09:09.809460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:00.284 [2024-11-27 19:09:09.809704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.284 [2024-11-27 19:09:09.809747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:00.284 [2024-11-27 19:09:09.809963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.284 pt3 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.284 
19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.284 "name": "raid_bdev1", 00:11:00.284 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:11:00.284 "strip_size_kb": 0, 00:11:00.284 "state": "online", 00:11:00.284 "raid_level": "raid1", 00:11:00.284 "superblock": true, 00:11:00.284 "num_base_bdevs": 3, 00:11:00.284 "num_base_bdevs_discovered": 2, 00:11:00.284 "num_base_bdevs_operational": 2, 00:11:00.284 "base_bdevs_list": [ 00:11:00.284 { 00:11:00.284 "name": null, 00:11:00.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.284 "is_configured": false, 00:11:00.284 "data_offset": 2048, 00:11:00.284 "data_size": 63488 00:11:00.284 }, 00:11:00.284 { 00:11:00.284 "name": "pt2", 00:11:00.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.284 "is_configured": true, 00:11:00.284 "data_offset": 2048, 00:11:00.284 "data_size": 63488 00:11:00.284 }, 00:11:00.284 { 00:11:00.284 "name": "pt3", 00:11:00.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.284 "is_configured": true, 00:11:00.284 "data_offset": 2048, 00:11:00.284 "data_size": 63488 00:11:00.284 } 00:11:00.284 ] 00:11:00.284 }' 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.284 19:09:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.853 [2024-11-27 19:09:10.243019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.853 [2024-11-27 19:09:10.243127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.853 [2024-11-27 19:09:10.243252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.853 [2024-11-27 19:09:10.243333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.853 [2024-11-27 19:09:10.243345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.853 [2024-11-27 19:09:10.298931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.853 [2024-11-27 19:09:10.299038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.853 [2024-11-27 19:09:10.299084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:00.853 [2024-11-27 19:09:10.299122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.853 [2024-11-27 19:09:10.301787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.853 [2024-11-27 19:09:10.301870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.853 [2024-11-27 19:09:10.302020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:00.853 [2024-11-27 19:09:10.302121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.853 [2024-11-27 19:09:10.302343] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:00.853 [2024-11-27 19:09:10.302408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.853 [2024-11-27 19:09:10.302451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:00.853 [2024-11-27 19:09:10.302562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.853 pt1 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.853 "name": "raid_bdev1", 00:11:00.853 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:11:00.853 "strip_size_kb": 0, 00:11:00.853 "state": "configuring", 00:11:00.853 "raid_level": "raid1", 00:11:00.853 "superblock": true, 00:11:00.853 "num_base_bdevs": 3, 00:11:00.853 "num_base_bdevs_discovered": 1, 00:11:00.853 "num_base_bdevs_operational": 2, 00:11:00.853 "base_bdevs_list": [ 00:11:00.853 { 00:11:00.853 "name": null, 00:11:00.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.853 "is_configured": false, 00:11:00.853 "data_offset": 2048, 00:11:00.853 "data_size": 63488 00:11:00.853 }, 00:11:00.853 { 00:11:00.853 "name": "pt2", 00:11:00.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.853 "is_configured": true, 00:11:00.853 "data_offset": 2048, 00:11:00.853 "data_size": 63488 00:11:00.853 }, 00:11:00.853 { 00:11:00.853 "name": null, 00:11:00.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.853 "is_configured": false, 00:11:00.853 "data_offset": 2048, 00:11:00.853 "data_size": 63488 00:11:00.853 } 00:11:00.853 ] 00:11:00.853 }' 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.853 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.113 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:01.113 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.113 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.113 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 [2024-11-27 19:09:10.790161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:01.374 [2024-11-27 19:09:10.790295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.374 [2024-11-27 19:09:10.790351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:01.374 [2024-11-27 19:09:10.790381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.374 [2024-11-27 19:09:10.791017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.374 [2024-11-27 19:09:10.791078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:01.374 [2024-11-27 19:09:10.791223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:01.374 [2024-11-27 19:09:10.791280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.374 [2024-11-27 19:09:10.791468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:01.374 [2024-11-27 19:09:10.791506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.374 [2024-11-27 19:09:10.791827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:01.374 [2024-11-27 19:09:10.792047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:01.374 [2024-11-27 19:09:10.792097] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:01.374 [2024-11-27 19:09:10.792287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.374 pt3 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.374 "name": "raid_bdev1", 00:11:01.374 "uuid": "d0c2be5d-cfa0-46de-ab03-da128c2f056a", 00:11:01.374 "strip_size_kb": 0, 00:11:01.374 "state": "online", 00:11:01.374 "raid_level": "raid1", 00:11:01.374 "superblock": true, 00:11:01.374 "num_base_bdevs": 3, 00:11:01.374 "num_base_bdevs_discovered": 2, 00:11:01.374 "num_base_bdevs_operational": 2, 00:11:01.374 "base_bdevs_list": [ 00:11:01.374 { 00:11:01.374 "name": null, 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.374 "is_configured": false, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "name": "pt2", 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.374 "is_configured": true, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 00:11:01.374 }, 00:11:01.374 { 00:11:01.374 "name": "pt3", 00:11:01.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.374 "is_configured": true, 00:11:01.374 "data_offset": 2048, 00:11:01.374 "data_size": 63488 00:11:01.374 } 00:11:01.374 ] 00:11:01.374 }' 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.374 19:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.634 19:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:01.634 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.634 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.634 19:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:01.634 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.634 19:09:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:01.893 [2024-11-27 19:09:11.277656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d0c2be5d-cfa0-46de-ab03-da128c2f056a '!=' d0c2be5d-cfa0-46de-ab03-da128c2f056a ']' 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68741 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68741 ']' 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68741 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68741 00:11:01.893 killing process with pid 68741 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68741' 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68741 00:11:01.893 [2024-11-27 19:09:11.347707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.893 [2024-11-27 19:09:11.347831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.893 19:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68741 00:11:01.893 [2024-11-27 19:09:11.347905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.893 [2024-11-27 19:09:11.347918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:02.153 [2024-11-27 19:09:11.686798] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.564 19:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:03.564 00:11:03.564 real 0m7.784s 00:11:03.564 user 0m11.852s 00:11:03.564 sys 0m1.510s 00:11:03.564 19:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.564 19:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.564 ************************************ 00:11:03.564 END TEST raid_superblock_test 00:11:03.564 ************************************ 00:11:03.564 19:09:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:03.564 19:09:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.564 19:09:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.564 19:09:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.564 ************************************ 00:11:03.564 START TEST raid_read_error_test 00:11:03.564 ************************************ 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:03.564 19:09:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.564 19:09:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8XU8be6bHk 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69187 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69187 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69187 ']' 00:11:03.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.564 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.564 [2024-11-27 19:09:13.143098] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:03.564 [2024-11-27 19:09:13.143218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69187 ] 00:11:03.824 [2024-11-27 19:09:13.318419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.083 [2024-11-27 19:09:13.459318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.083 [2024-11-27 19:09:13.711090] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.083 [2024-11-27 19:09:13.711149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.343 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.343 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.343 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.343 19:09:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.343 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.343 19:09:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 BaseBdev1_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 true 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 [2024-11-27 19:09:14.040705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.603 [2024-11-27 19:09:14.040816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.603 [2024-11-27 19:09:14.040869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.603 [2024-11-27 19:09:14.040902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.603 [2024-11-27 19:09:14.043350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.603 [2024-11-27 19:09:14.043431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.603 BaseBdev1 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 BaseBdev2_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 true 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 [2024-11-27 19:09:14.117366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.603 [2024-11-27 19:09:14.117433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.603 [2024-11-27 19:09:14.117452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.603 [2024-11-27 19:09:14.117463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.603 [2024-11-27 19:09:14.119992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.603 [2024-11-27 19:09:14.120035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.603 BaseBdev2 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 BaseBdev3_malloc 00:11:04.603 19:09:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 true 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 [2024-11-27 19:09:14.206005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.603 [2024-11-27 19:09:14.206066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.603 [2024-11-27 19:09:14.206087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:04.603 [2024-11-27 19:09:14.206099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.603 [2024-11-27 19:09:14.208633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.603 [2024-11-27 19:09:14.208677] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:04.603 BaseBdev3 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.603 [2024-11-27 19:09:14.218061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.603 [2024-11-27 19:09:14.220366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.603 [2024-11-27 19:09:14.220522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.603 [2024-11-27 19:09:14.220788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:04.603 [2024-11-27 19:09:14.220838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.603 [2024-11-27 19:09:14.221121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:04.603 [2024-11-27 19:09:14.221347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:04.603 [2024-11-27 19:09:14.221391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:04.603 [2024-11-27 19:09:14.221576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.603 19:09:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.603 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.863 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.863 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.863 "name": "raid_bdev1", 00:11:04.863 "uuid": "00a45190-d256-4a10-aca8-dd2f1628ca32", 00:11:04.863 "strip_size_kb": 0, 00:11:04.863 "state": "online", 00:11:04.863 "raid_level": "raid1", 00:11:04.863 "superblock": true, 00:11:04.863 "num_base_bdevs": 3, 00:11:04.863 "num_base_bdevs_discovered": 3, 00:11:04.863 "num_base_bdevs_operational": 3, 00:11:04.863 "base_bdevs_list": [ 00:11:04.863 { 00:11:04.863 "name": "BaseBdev1", 00:11:04.863 "uuid": "34faea3b-1d89-597a-b11b-0acf889303fe", 00:11:04.863 "is_configured": true, 00:11:04.863 "data_offset": 2048, 00:11:04.863 "data_size": 63488 00:11:04.863 }, 00:11:04.863 { 00:11:04.863 "name": "BaseBdev2", 00:11:04.863 "uuid": "01103d21-363e-5e74-a00c-f21122dcb998", 00:11:04.863 "is_configured": true, 00:11:04.863 "data_offset": 2048, 00:11:04.863 "data_size": 63488 
00:11:04.863 }, 00:11:04.863 { 00:11:04.863 "name": "BaseBdev3", 00:11:04.863 "uuid": "e7f01be1-66a4-52d6-bf2a-d7502771b1fc", 00:11:04.863 "is_configured": true, 00:11:04.863 "data_offset": 2048, 00:11:04.863 "data_size": 63488 00:11:04.863 } 00:11:04.863 ] 00:11:04.863 }' 00:11:04.863 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.863 19:09:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.122 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.122 19:09:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:05.122 [2024-11-27 19:09:14.738651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.060 
19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.060 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.061 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.321 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.321 "name": "raid_bdev1", 00:11:06.321 "uuid": "00a45190-d256-4a10-aca8-dd2f1628ca32", 00:11:06.321 "strip_size_kb": 0, 00:11:06.321 "state": "online", 00:11:06.321 "raid_level": "raid1", 00:11:06.321 "superblock": true, 00:11:06.321 "num_base_bdevs": 3, 00:11:06.321 "num_base_bdevs_discovered": 3, 00:11:06.321 "num_base_bdevs_operational": 3, 00:11:06.321 "base_bdevs_list": [ 00:11:06.321 { 00:11:06.321 "name": "BaseBdev1", 00:11:06.321 "uuid": "34faea3b-1d89-597a-b11b-0acf889303fe", 
00:11:06.321 "is_configured": true, 00:11:06.321 "data_offset": 2048, 00:11:06.321 "data_size": 63488 00:11:06.321 }, 00:11:06.321 { 00:11:06.321 "name": "BaseBdev2", 00:11:06.321 "uuid": "01103d21-363e-5e74-a00c-f21122dcb998", 00:11:06.321 "is_configured": true, 00:11:06.321 "data_offset": 2048, 00:11:06.321 "data_size": 63488 00:11:06.321 }, 00:11:06.321 { 00:11:06.321 "name": "BaseBdev3", 00:11:06.321 "uuid": "e7f01be1-66a4-52d6-bf2a-d7502771b1fc", 00:11:06.321 "is_configured": true, 00:11:06.321 "data_offset": 2048, 00:11:06.321 "data_size": 63488 00:11:06.321 } 00:11:06.321 ] 00:11:06.321 }' 00:11:06.321 19:09:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.321 19:09:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.580 [2024-11-27 19:09:16.083320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.580 [2024-11-27 19:09:16.083405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.580 [2024-11-27 19:09:16.086292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.580 [2024-11-27 19:09:16.086387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.580 [2024-11-27 19:09:16.086520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.580 [2024-11-27 19:09:16.086532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:06.580 { 00:11:06.580 "results": [ 00:11:06.580 { 00:11:06.580 "job": "raid_bdev1", 
00:11:06.580 "core_mask": "0x1", 00:11:06.580 "workload": "randrw", 00:11:06.580 "percentage": 50, 00:11:06.580 "status": "finished", 00:11:06.580 "queue_depth": 1, 00:11:06.580 "io_size": 131072, 00:11:06.580 "runtime": 1.345259, 00:11:06.580 "iops": 9835.280789795868, 00:11:06.580 "mibps": 1229.4100987244835, 00:11:06.580 "io_failed": 0, 00:11:06.580 "io_timeout": 0, 00:11:06.580 "avg_latency_us": 99.0077052733441, 00:11:06.580 "min_latency_us": 23.811353711790392, 00:11:06.580 "max_latency_us": 1495.3082969432314 00:11:06.580 } 00:11:06.580 ], 00:11:06.580 "core_count": 1 00:11:06.580 } 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69187 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69187 ']' 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69187 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69187 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69187' 00:11:06.580 killing process with pid 69187 00:11:06.580 19:09:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69187 00:11:06.580 [2024-11-27 19:09:16.135616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.580 19:09:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69187 00:11:06.840 [2024-11-27 19:09:16.390096] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8XU8be6bHk 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:08.222 00:11:08.222 real 0m4.651s 00:11:08.222 user 0m5.318s 00:11:08.222 sys 0m0.707s 00:11:08.222 ************************************ 00:11:08.222 END TEST raid_read_error_test 00:11:08.222 ************************************ 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.222 19:09:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.222 19:09:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:08.222 19:09:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:08.222 19:09:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.222 19:09:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.223 ************************************ 00:11:08.223 START TEST raid_write_error_test 00:11:08.223 ************************************ 00:11:08.223 19:09:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WJ7n0Bt1uC 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69327 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69327 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69327 ']' 00:11:08.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.223 19:09:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.483 [2024-11-27 19:09:17.867916] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:08.483 [2024-11-27 19:09:17.868038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69327 ] 00:11:08.483 [2024-11-27 19:09:18.047080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.743 [2024-11-27 19:09:18.184376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.003 [2024-11-27 19:09:18.419539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.003 [2024-11-27 19:09:18.419600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.263 BaseBdev1_malloc 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.263 true 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.263 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.263 [2024-11-27 19:09:18.763994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:09.263 [2024-11-27 19:09:18.764122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.264 [2024-11-27 19:09:18.764164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:09.264 [2024-11-27 19:09:18.764200] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.264 [2024-11-27 19:09:18.766656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.264 [2024-11-27 19:09:18.766742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:09.264 BaseBdev1 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.264 BaseBdev2_malloc 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.264 true 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.264 [2024-11-27 19:09:18.837266] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:09.264 [2024-11-27 19:09:18.837385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.264 [2024-11-27 19:09:18.837421] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:09.264 [2024-11-27 19:09:18.837470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.264 [2024-11-27 19:09:18.839934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.264 [2024-11-27 19:09:18.840011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:09.264 BaseBdev2 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:09.264 19:09:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.264 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.523 BaseBdev3_malloc 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.523 true 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.523 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.523 [2024-11-27 19:09:18.921660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:09.523 [2024-11-27 19:09:18.921776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.523 [2024-11-27 19:09:18.921814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.524 [2024-11-27 19:09:18.921846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.524 [2024-11-27 19:09:18.924385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.524 [2024-11-27 19:09:18.924465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:09.524 BaseBdev3 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.524 [2024-11-27 19:09:18.933727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.524 [2024-11-27 19:09:18.935861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.524 [2024-11-27 19:09:18.935939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.524 [2024-11-27 19:09:18.936151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.524 [2024-11-27 19:09:18.936165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.524 [2024-11-27 19:09:18.936409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:09.524 [2024-11-27 19:09:18.936609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.524 [2024-11-27 19:09:18.936621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:09.524 [2024-11-27 19:09:18.936792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.524 "name": "raid_bdev1", 00:11:09.524 "uuid": "57c8bf81-18f6-427b-9569-c5c2a2fdd70b", 00:11:09.524 "strip_size_kb": 0, 00:11:09.524 "state": "online", 00:11:09.524 "raid_level": "raid1", 00:11:09.524 "superblock": true, 00:11:09.524 "num_base_bdevs": 3, 00:11:09.524 "num_base_bdevs_discovered": 3, 00:11:09.524 "num_base_bdevs_operational": 3, 00:11:09.524 "base_bdevs_list": [ 00:11:09.524 { 00:11:09.524 "name": "BaseBdev1", 00:11:09.524 
"uuid": "a1fec8a0-6395-5094-99f3-6f84abe9d0a8", 00:11:09.524 "is_configured": true, 00:11:09.524 "data_offset": 2048, 00:11:09.524 "data_size": 63488 00:11:09.524 }, 00:11:09.524 { 00:11:09.524 "name": "BaseBdev2", 00:11:09.524 "uuid": "cada532e-fb88-55e2-98ac-a3eb27f1e60f", 00:11:09.524 "is_configured": true, 00:11:09.524 "data_offset": 2048, 00:11:09.524 "data_size": 63488 00:11:09.524 }, 00:11:09.524 { 00:11:09.524 "name": "BaseBdev3", 00:11:09.524 "uuid": "043067b6-2fa2-541c-ba89-433370267bc4", 00:11:09.524 "is_configured": true, 00:11:09.524 "data_offset": 2048, 00:11:09.524 "data_size": 63488 00:11:09.524 } 00:11:09.524 ] 00:11:09.524 }' 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.524 19:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.783 19:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.783 19:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.042 [2024-11-27 19:09:19.450394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:10.981 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:10.981 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.981 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.982 [2024-11-27 19:09:20.386299] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:10.982 [2024-11-27 19:09:20.386429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.982 [2024-11-27 19:09:20.386716] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.982 "name": "raid_bdev1", 00:11:10.982 "uuid": "57c8bf81-18f6-427b-9569-c5c2a2fdd70b", 00:11:10.982 "strip_size_kb": 0, 00:11:10.982 "state": "online", 00:11:10.982 "raid_level": "raid1", 00:11:10.982 "superblock": true, 00:11:10.982 "num_base_bdevs": 3, 00:11:10.982 "num_base_bdevs_discovered": 2, 00:11:10.982 "num_base_bdevs_operational": 2, 00:11:10.982 "base_bdevs_list": [ 00:11:10.982 { 00:11:10.982 "name": null, 00:11:10.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.982 "is_configured": false, 00:11:10.982 "data_offset": 0, 00:11:10.982 "data_size": 63488 00:11:10.982 }, 00:11:10.982 { 00:11:10.982 "name": "BaseBdev2", 00:11:10.982 "uuid": "cada532e-fb88-55e2-98ac-a3eb27f1e60f", 00:11:10.982 "is_configured": true, 00:11:10.982 "data_offset": 2048, 00:11:10.982 "data_size": 63488 00:11:10.982 }, 00:11:10.982 { 00:11:10.982 "name": "BaseBdev3", 00:11:10.982 "uuid": "043067b6-2fa2-541c-ba89-433370267bc4", 00:11:10.982 "is_configured": true, 00:11:10.982 "data_offset": 2048, 00:11:10.982 "data_size": 63488 00:11:10.982 } 00:11:10.982 ] 00:11:10.982 }' 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.982 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.242 [2024-11-27 19:09:20.825618] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.242 [2024-11-27 19:09:20.825721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.242 [2024-11-27 19:09:20.828579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.242 [2024-11-27 19:09:20.828704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.242 [2024-11-27 19:09:20.828813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.242 [2024-11-27 19:09:20.828855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:11.242 { 00:11:11.242 "results": [ 00:11:11.242 { 00:11:11.242 "job": "raid_bdev1", 00:11:11.242 "core_mask": "0x1", 00:11:11.242 "workload": "randrw", 00:11:11.242 "percentage": 50, 00:11:11.242 "status": "finished", 00:11:11.242 "queue_depth": 1, 00:11:11.242 "io_size": 131072, 00:11:11.242 "runtime": 1.375849, 00:11:11.242 "iops": 11240.332332981308, 00:11:11.242 "mibps": 1405.0415416226635, 00:11:11.242 "io_failed": 0, 00:11:11.242 "io_timeout": 0, 00:11:11.242 "avg_latency_us": 86.30235689265945, 00:11:11.242 "min_latency_us": 23.475982532751093, 00:11:11.242 "max_latency_us": 1380.8349344978167 00:11:11.242 } 00:11:11.242 ], 00:11:11.242 "core_count": 1 00:11:11.242 } 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69327 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69327 ']' 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69327 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.242 19:09:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69327 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.242 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69327' 00:11:11.242 killing process with pid 69327 00:11:11.243 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69327 00:11:11.243 [2024-11-27 19:09:20.876028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.243 19:09:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69327 00:11:11.504 [2024-11-27 19:09:21.128379] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WJ7n0Bt1uC 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:12.891 00:11:12.891 real 0m4.682s 00:11:12.891 user 0m5.374s 00:11:12.891 sys 0m0.678s 00:11:12.891 19:09:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.891 ************************************ 00:11:12.891 END TEST raid_write_error_test 00:11:12.891 ************************************ 00:11:12.891 19:09:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.891 19:09:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:12.891 19:09:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:12.891 19:09:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:12.891 19:09:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.891 19:09:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.891 19:09:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.891 ************************************ 00:11:12.891 START TEST raid_state_function_test 00:11:12.891 ************************************ 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:12.891 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:12.892 
19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69476 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69476' 00:11:12.892 Process raid pid: 69476 00:11:12.892 19:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69476 00:11:13.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.151 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69476 ']' 00:11:13.151 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.151 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.151 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.151 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.151 19:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.151 [2024-11-27 19:09:22.617252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:13.151 [2024-11-27 19:09:22.617392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.151 [2024-11-27 19:09:22.781845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.411 [2024-11-27 19:09:22.923882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.671 [2024-11-27 19:09:23.163714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.671 [2024-11-27 19:09:23.163761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.931 [2024-11-27 19:09:23.463059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.931 [2024-11-27 19:09:23.463144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.931 [2024-11-27 19:09:23.463156] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.931 [2024-11-27 19:09:23.463166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.931 [2024-11-27 19:09:23.463173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:13.931 [2024-11-27 19:09:23.463183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.931 [2024-11-27 19:09:23.463190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.931 [2024-11-27 19:09:23.463199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.931 "name": "Existed_Raid", 00:11:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.931 "strip_size_kb": 64, 00:11:13.931 "state": "configuring", 00:11:13.931 "raid_level": "raid0", 00:11:13.931 "superblock": false, 00:11:13.931 "num_base_bdevs": 4, 00:11:13.931 "num_base_bdevs_discovered": 0, 00:11:13.931 "num_base_bdevs_operational": 4, 00:11:13.931 "base_bdevs_list": [ 00:11:13.931 { 00:11:13.931 "name": "BaseBdev1", 00:11:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.931 "is_configured": false, 00:11:13.931 "data_offset": 0, 00:11:13.931 "data_size": 0 00:11:13.931 }, 00:11:13.931 { 00:11:13.931 "name": "BaseBdev2", 00:11:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.931 "is_configured": false, 00:11:13.931 "data_offset": 0, 00:11:13.931 "data_size": 0 00:11:13.931 }, 00:11:13.931 { 00:11:13.931 "name": "BaseBdev3", 00:11:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.931 "is_configured": false, 00:11:13.931 "data_offset": 0, 00:11:13.931 "data_size": 0 00:11:13.931 }, 00:11:13.931 { 00:11:13.931 "name": "BaseBdev4", 00:11:13.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.931 "is_configured": false, 00:11:13.931 "data_offset": 0, 00:11:13.931 "data_size": 0 00:11:13.931 } 00:11:13.931 ] 00:11:13.931 }' 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.931 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.501 [2024-11-27 19:09:23.946254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.501 [2024-11-27 19:09:23.946380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.501 [2024-11-27 19:09:23.958182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.501 [2024-11-27 19:09:23.958269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.501 [2024-11-27 19:09:23.958297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.501 [2024-11-27 19:09:23.958320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.501 [2024-11-27 19:09:23.958338] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.501 [2024-11-27 19:09:23.958360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.501 [2024-11-27 19:09:23.958377] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.501 [2024-11-27 19:09:23.958398] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.501 19:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.501 [2024-11-27 19:09:24.013467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.501 BaseBdev1 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.501 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.501 [ 00:11:14.501 { 00:11:14.501 "name": "BaseBdev1", 00:11:14.501 "aliases": [ 00:11:14.501 "d87a43be-9bae-4b43-a03b-33b7d2788908" 00:11:14.501 ], 00:11:14.501 "product_name": "Malloc disk", 00:11:14.501 "block_size": 512, 00:11:14.501 "num_blocks": 65536, 00:11:14.501 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:14.501 "assigned_rate_limits": { 00:11:14.501 "rw_ios_per_sec": 0, 00:11:14.501 "rw_mbytes_per_sec": 0, 00:11:14.501 "r_mbytes_per_sec": 0, 00:11:14.501 "w_mbytes_per_sec": 0 00:11:14.502 }, 00:11:14.502 "claimed": true, 00:11:14.502 "claim_type": "exclusive_write", 00:11:14.502 "zoned": false, 00:11:14.502 "supported_io_types": { 00:11:14.502 "read": true, 00:11:14.502 "write": true, 00:11:14.502 "unmap": true, 00:11:14.502 "flush": true, 00:11:14.502 "reset": true, 00:11:14.502 "nvme_admin": false, 00:11:14.502 "nvme_io": false, 00:11:14.502 "nvme_io_md": false, 00:11:14.502 "write_zeroes": true, 00:11:14.502 "zcopy": true, 00:11:14.502 "get_zone_info": false, 00:11:14.502 "zone_management": false, 00:11:14.502 "zone_append": false, 00:11:14.502 "compare": false, 00:11:14.502 "compare_and_write": false, 00:11:14.502 "abort": true, 00:11:14.502 "seek_hole": false, 00:11:14.502 "seek_data": false, 00:11:14.502 "copy": true, 00:11:14.502 "nvme_iov_md": false 00:11:14.502 }, 00:11:14.502 "memory_domains": [ 00:11:14.502 { 00:11:14.502 "dma_device_id": "system", 00:11:14.502 "dma_device_type": 1 00:11:14.502 }, 00:11:14.502 { 00:11:14.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.502 "dma_device_type": 2 00:11:14.502 } 00:11:14.502 ], 00:11:14.502 "driver_specific": {} 00:11:14.502 } 00:11:14.502 ] 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.502 "name": "Existed_Raid", 
00:11:14.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.502 "strip_size_kb": 64, 00:11:14.502 "state": "configuring", 00:11:14.502 "raid_level": "raid0", 00:11:14.502 "superblock": false, 00:11:14.502 "num_base_bdevs": 4, 00:11:14.502 "num_base_bdevs_discovered": 1, 00:11:14.502 "num_base_bdevs_operational": 4, 00:11:14.502 "base_bdevs_list": [ 00:11:14.502 { 00:11:14.502 "name": "BaseBdev1", 00:11:14.502 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:14.502 "is_configured": true, 00:11:14.502 "data_offset": 0, 00:11:14.502 "data_size": 65536 00:11:14.502 }, 00:11:14.502 { 00:11:14.502 "name": "BaseBdev2", 00:11:14.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.502 "is_configured": false, 00:11:14.502 "data_offset": 0, 00:11:14.502 "data_size": 0 00:11:14.502 }, 00:11:14.502 { 00:11:14.502 "name": "BaseBdev3", 00:11:14.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.502 "is_configured": false, 00:11:14.502 "data_offset": 0, 00:11:14.502 "data_size": 0 00:11:14.502 }, 00:11:14.502 { 00:11:14.502 "name": "BaseBdev4", 00:11:14.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.502 "is_configured": false, 00:11:14.502 "data_offset": 0, 00:11:14.502 "data_size": 0 00:11:14.502 } 00:11:14.502 ] 00:11:14.502 }' 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.502 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.072 [2024-11-27 19:09:24.524667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.072 [2024-11-27 19:09:24.524758] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.072 [2024-11-27 19:09:24.536685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.072 [2024-11-27 19:09:24.538929] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.072 [2024-11-27 19:09:24.539022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.072 [2024-11-27 19:09:24.539039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.072 [2024-11-27 19:09:24.539050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.072 [2024-11-27 19:09:24.539057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.072 [2024-11-27 19:09:24.539066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.072 "name": "Existed_Raid", 00:11:15.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.072 "strip_size_kb": 64, 00:11:15.072 "state": "configuring", 00:11:15.072 "raid_level": "raid0", 00:11:15.072 "superblock": false, 00:11:15.072 "num_base_bdevs": 4, 00:11:15.072 
"num_base_bdevs_discovered": 1, 00:11:15.072 "num_base_bdevs_operational": 4, 00:11:15.072 "base_bdevs_list": [ 00:11:15.072 { 00:11:15.072 "name": "BaseBdev1", 00:11:15.072 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:15.072 "is_configured": true, 00:11:15.072 "data_offset": 0, 00:11:15.072 "data_size": 65536 00:11:15.072 }, 00:11:15.072 { 00:11:15.072 "name": "BaseBdev2", 00:11:15.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.072 "is_configured": false, 00:11:15.072 "data_offset": 0, 00:11:15.072 "data_size": 0 00:11:15.072 }, 00:11:15.072 { 00:11:15.072 "name": "BaseBdev3", 00:11:15.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.072 "is_configured": false, 00:11:15.072 "data_offset": 0, 00:11:15.072 "data_size": 0 00:11:15.072 }, 00:11:15.072 { 00:11:15.072 "name": "BaseBdev4", 00:11:15.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.072 "is_configured": false, 00:11:15.072 "data_offset": 0, 00:11:15.072 "data_size": 0 00:11:15.072 } 00:11:15.072 ] 00:11:15.072 }' 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.072 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.790 19:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.790 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.790 19:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.790 [2024-11-27 19:09:25.044285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.790 BaseBdev2 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.790 19:09:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.790 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.791 [ 00:11:15.791 { 00:11:15.791 "name": "BaseBdev2", 00:11:15.791 "aliases": [ 00:11:15.791 "639dd6ed-a4dd-437d-873e-f20bb15eee24" 00:11:15.791 ], 00:11:15.791 "product_name": "Malloc disk", 00:11:15.791 "block_size": 512, 00:11:15.791 "num_blocks": 65536, 00:11:15.791 "uuid": "639dd6ed-a4dd-437d-873e-f20bb15eee24", 00:11:15.791 "assigned_rate_limits": { 00:11:15.791 "rw_ios_per_sec": 0, 00:11:15.791 "rw_mbytes_per_sec": 0, 00:11:15.791 "r_mbytes_per_sec": 0, 00:11:15.791 "w_mbytes_per_sec": 0 00:11:15.791 }, 00:11:15.791 "claimed": true, 00:11:15.791 "claim_type": "exclusive_write", 00:11:15.791 "zoned": false, 00:11:15.791 "supported_io_types": { 
00:11:15.791 "read": true, 00:11:15.791 "write": true, 00:11:15.791 "unmap": true, 00:11:15.791 "flush": true, 00:11:15.791 "reset": true, 00:11:15.791 "nvme_admin": false, 00:11:15.791 "nvme_io": false, 00:11:15.791 "nvme_io_md": false, 00:11:15.791 "write_zeroes": true, 00:11:15.791 "zcopy": true, 00:11:15.791 "get_zone_info": false, 00:11:15.791 "zone_management": false, 00:11:15.791 "zone_append": false, 00:11:15.791 "compare": false, 00:11:15.791 "compare_and_write": false, 00:11:15.791 "abort": true, 00:11:15.791 "seek_hole": false, 00:11:15.791 "seek_data": false, 00:11:15.791 "copy": true, 00:11:15.791 "nvme_iov_md": false 00:11:15.791 }, 00:11:15.791 "memory_domains": [ 00:11:15.791 { 00:11:15.791 "dma_device_id": "system", 00:11:15.791 "dma_device_type": 1 00:11:15.791 }, 00:11:15.791 { 00:11:15.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.791 "dma_device_type": 2 00:11:15.791 } 00:11:15.791 ], 00:11:15.791 "driver_specific": {} 00:11:15.791 } 00:11:15.791 ] 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.791 "name": "Existed_Raid", 00:11:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.791 "strip_size_kb": 64, 00:11:15.791 "state": "configuring", 00:11:15.791 "raid_level": "raid0", 00:11:15.791 "superblock": false, 00:11:15.791 "num_base_bdevs": 4, 00:11:15.791 "num_base_bdevs_discovered": 2, 00:11:15.791 "num_base_bdevs_operational": 4, 00:11:15.791 "base_bdevs_list": [ 00:11:15.791 { 00:11:15.791 "name": "BaseBdev1", 00:11:15.791 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:15.791 "is_configured": true, 00:11:15.791 "data_offset": 0, 00:11:15.791 "data_size": 65536 00:11:15.791 }, 00:11:15.791 { 00:11:15.791 "name": "BaseBdev2", 00:11:15.791 "uuid": "639dd6ed-a4dd-437d-873e-f20bb15eee24", 00:11:15.791 
"is_configured": true, 00:11:15.791 "data_offset": 0, 00:11:15.791 "data_size": 65536 00:11:15.791 }, 00:11:15.791 { 00:11:15.791 "name": "BaseBdev3", 00:11:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.791 "is_configured": false, 00:11:15.791 "data_offset": 0, 00:11:15.791 "data_size": 0 00:11:15.791 }, 00:11:15.791 { 00:11:15.791 "name": "BaseBdev4", 00:11:15.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.791 "is_configured": false, 00:11:15.791 "data_offset": 0, 00:11:15.791 "data_size": 0 00:11:15.791 } 00:11:15.791 ] 00:11:15.791 }' 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.791 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 [2024-11-27 19:09:25.626993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.050 BaseBdev3 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 [ 00:11:16.050 { 00:11:16.050 "name": "BaseBdev3", 00:11:16.050 "aliases": [ 00:11:16.050 "17d7bd54-26e8-4447-83d4-8222ce7c98be" 00:11:16.050 ], 00:11:16.050 "product_name": "Malloc disk", 00:11:16.050 "block_size": 512, 00:11:16.050 "num_blocks": 65536, 00:11:16.050 "uuid": "17d7bd54-26e8-4447-83d4-8222ce7c98be", 00:11:16.050 "assigned_rate_limits": { 00:11:16.050 "rw_ios_per_sec": 0, 00:11:16.050 "rw_mbytes_per_sec": 0, 00:11:16.050 "r_mbytes_per_sec": 0, 00:11:16.050 "w_mbytes_per_sec": 0 00:11:16.050 }, 00:11:16.050 "claimed": true, 00:11:16.050 "claim_type": "exclusive_write", 00:11:16.050 "zoned": false, 00:11:16.050 "supported_io_types": { 00:11:16.050 "read": true, 00:11:16.050 "write": true, 00:11:16.050 "unmap": true, 00:11:16.050 "flush": true, 00:11:16.050 "reset": true, 00:11:16.050 "nvme_admin": false, 00:11:16.050 "nvme_io": false, 00:11:16.050 "nvme_io_md": false, 00:11:16.050 "write_zeroes": true, 00:11:16.050 "zcopy": true, 00:11:16.050 "get_zone_info": false, 00:11:16.050 "zone_management": false, 00:11:16.050 "zone_append": false, 00:11:16.050 "compare": false, 00:11:16.050 "compare_and_write": false, 
00:11:16.050 "abort": true, 00:11:16.050 "seek_hole": false, 00:11:16.050 "seek_data": false, 00:11:16.050 "copy": true, 00:11:16.050 "nvme_iov_md": false 00:11:16.050 }, 00:11:16.050 "memory_domains": [ 00:11:16.050 { 00:11:16.050 "dma_device_id": "system", 00:11:16.050 "dma_device_type": 1 00:11:16.050 }, 00:11:16.050 { 00:11:16.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.050 "dma_device_type": 2 00:11:16.050 } 00:11:16.050 ], 00:11:16.050 "driver_specific": {} 00:11:16.050 } 00:11:16.050 ] 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.050 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.051 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.309 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.309 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.309 "name": "Existed_Raid", 00:11:16.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.309 "strip_size_kb": 64, 00:11:16.309 "state": "configuring", 00:11:16.309 "raid_level": "raid0", 00:11:16.309 "superblock": false, 00:11:16.309 "num_base_bdevs": 4, 00:11:16.309 "num_base_bdevs_discovered": 3, 00:11:16.309 "num_base_bdevs_operational": 4, 00:11:16.309 "base_bdevs_list": [ 00:11:16.309 { 00:11:16.309 "name": "BaseBdev1", 00:11:16.309 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:16.309 "is_configured": true, 00:11:16.309 "data_offset": 0, 00:11:16.309 "data_size": 65536 00:11:16.309 }, 00:11:16.309 { 00:11:16.309 "name": "BaseBdev2", 00:11:16.309 "uuid": "639dd6ed-a4dd-437d-873e-f20bb15eee24", 00:11:16.309 "is_configured": true, 00:11:16.309 "data_offset": 0, 00:11:16.309 "data_size": 65536 00:11:16.309 }, 00:11:16.309 { 00:11:16.309 "name": "BaseBdev3", 00:11:16.309 "uuid": "17d7bd54-26e8-4447-83d4-8222ce7c98be", 00:11:16.309 "is_configured": true, 00:11:16.309 "data_offset": 0, 00:11:16.309 "data_size": 65536 00:11:16.309 }, 00:11:16.309 { 00:11:16.309 "name": "BaseBdev4", 00:11:16.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.309 "is_configured": false, 
00:11:16.310 "data_offset": 0, 00:11:16.310 "data_size": 0 00:11:16.310 } 00:11:16.310 ] 00:11:16.310 }' 00:11:16.310 19:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.310 19:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.568 [2024-11-27 19:09:26.160628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.568 [2024-11-27 19:09:26.160680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.568 [2024-11-27 19:09:26.160690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:16.568 [2024-11-27 19:09:26.161157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.568 [2024-11-27 19:09:26.161362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.568 [2024-11-27 19:09:26.161376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:16.568 [2024-11-27 19:09:26.161674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.568 BaseBdev4 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.568 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.569 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.569 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.569 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 [ 00:11:16.569 { 00:11:16.569 "name": "BaseBdev4", 00:11:16.569 "aliases": [ 00:11:16.569 "390a5809-8f0b-4758-aa83-8b13ff408d8e" 00:11:16.569 ], 00:11:16.569 "product_name": "Malloc disk", 00:11:16.569 "block_size": 512, 00:11:16.569 "num_blocks": 65536, 00:11:16.569 "uuid": "390a5809-8f0b-4758-aa83-8b13ff408d8e", 00:11:16.569 "assigned_rate_limits": { 00:11:16.569 "rw_ios_per_sec": 0, 00:11:16.569 "rw_mbytes_per_sec": 0, 00:11:16.569 "r_mbytes_per_sec": 0, 00:11:16.569 "w_mbytes_per_sec": 0 00:11:16.569 }, 00:11:16.569 "claimed": true, 00:11:16.569 "claim_type": "exclusive_write", 00:11:16.569 "zoned": false, 00:11:16.569 "supported_io_types": { 00:11:16.569 "read": true, 00:11:16.569 "write": true, 00:11:16.569 "unmap": true, 00:11:16.569 "flush": true, 00:11:16.569 "reset": true, 00:11:16.569 
"nvme_admin": false, 00:11:16.569 "nvme_io": false, 00:11:16.569 "nvme_io_md": false, 00:11:16.569 "write_zeroes": true, 00:11:16.569 "zcopy": true, 00:11:16.569 "get_zone_info": false, 00:11:16.569 "zone_management": false, 00:11:16.569 "zone_append": false, 00:11:16.569 "compare": false, 00:11:16.569 "compare_and_write": false, 00:11:16.569 "abort": true, 00:11:16.569 "seek_hole": false, 00:11:16.569 "seek_data": false, 00:11:16.569 "copy": true, 00:11:16.569 "nvme_iov_md": false 00:11:16.569 }, 00:11:16.569 "memory_domains": [ 00:11:16.569 { 00:11:16.569 "dma_device_id": "system", 00:11:16.569 "dma_device_type": 1 00:11:16.569 }, 00:11:16.569 { 00:11:16.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.569 "dma_device_type": 2 00:11:16.569 } 00:11:16.569 ], 00:11:16.828 "driver_specific": {} 00:11:16.828 } 00:11:16.828 ] 00:11:16.828 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.828 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.828 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.828 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.829 19:09:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.829 "name": "Existed_Raid", 00:11:16.829 "uuid": "633b1cf8-7c82-463c-8ca7-ecacff706a83", 00:11:16.829 "strip_size_kb": 64, 00:11:16.829 "state": "online", 00:11:16.829 "raid_level": "raid0", 00:11:16.829 "superblock": false, 00:11:16.829 "num_base_bdevs": 4, 00:11:16.829 "num_base_bdevs_discovered": 4, 00:11:16.829 "num_base_bdevs_operational": 4, 00:11:16.829 "base_bdevs_list": [ 00:11:16.829 { 00:11:16.829 "name": "BaseBdev1", 00:11:16.829 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:16.829 "is_configured": true, 00:11:16.829 "data_offset": 0, 00:11:16.829 "data_size": 65536 00:11:16.829 }, 00:11:16.829 { 00:11:16.829 "name": "BaseBdev2", 00:11:16.829 "uuid": "639dd6ed-a4dd-437d-873e-f20bb15eee24", 00:11:16.829 "is_configured": true, 00:11:16.829 "data_offset": 0, 00:11:16.829 "data_size": 65536 00:11:16.829 }, 00:11:16.829 { 00:11:16.829 "name": "BaseBdev3", 00:11:16.829 "uuid": 
"17d7bd54-26e8-4447-83d4-8222ce7c98be", 00:11:16.829 "is_configured": true, 00:11:16.829 "data_offset": 0, 00:11:16.829 "data_size": 65536 00:11:16.829 }, 00:11:16.829 { 00:11:16.829 "name": "BaseBdev4", 00:11:16.829 "uuid": "390a5809-8f0b-4758-aa83-8b13ff408d8e", 00:11:16.829 "is_configured": true, 00:11:16.829 "data_offset": 0, 00:11:16.829 "data_size": 65536 00:11:16.829 } 00:11:16.829 ] 00:11:16.829 }' 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.829 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.089 [2024-11-27 19:09:26.640256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.089 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.089 19:09:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.090 "name": "Existed_Raid", 00:11:17.090 "aliases": [ 00:11:17.090 "633b1cf8-7c82-463c-8ca7-ecacff706a83" 00:11:17.090 ], 00:11:17.090 "product_name": "Raid Volume", 00:11:17.090 "block_size": 512, 00:11:17.090 "num_blocks": 262144, 00:11:17.090 "uuid": "633b1cf8-7c82-463c-8ca7-ecacff706a83", 00:11:17.090 "assigned_rate_limits": { 00:11:17.090 "rw_ios_per_sec": 0, 00:11:17.090 "rw_mbytes_per_sec": 0, 00:11:17.090 "r_mbytes_per_sec": 0, 00:11:17.090 "w_mbytes_per_sec": 0 00:11:17.090 }, 00:11:17.090 "claimed": false, 00:11:17.090 "zoned": false, 00:11:17.090 "supported_io_types": { 00:11:17.090 "read": true, 00:11:17.090 "write": true, 00:11:17.090 "unmap": true, 00:11:17.090 "flush": true, 00:11:17.090 "reset": true, 00:11:17.090 "nvme_admin": false, 00:11:17.090 "nvme_io": false, 00:11:17.090 "nvme_io_md": false, 00:11:17.090 "write_zeroes": true, 00:11:17.090 "zcopy": false, 00:11:17.090 "get_zone_info": false, 00:11:17.090 "zone_management": false, 00:11:17.090 "zone_append": false, 00:11:17.090 "compare": false, 00:11:17.090 "compare_and_write": false, 00:11:17.090 "abort": false, 00:11:17.090 "seek_hole": false, 00:11:17.090 "seek_data": false, 00:11:17.090 "copy": false, 00:11:17.090 "nvme_iov_md": false 00:11:17.090 }, 00:11:17.090 "memory_domains": [ 00:11:17.090 { 00:11:17.090 "dma_device_id": "system", 00:11:17.090 "dma_device_type": 1 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.090 "dma_device_type": 2 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "system", 00:11:17.090 "dma_device_type": 1 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.090 "dma_device_type": 2 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "system", 00:11:17.090 "dma_device_type": 1 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:17.090 "dma_device_type": 2 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "system", 00:11:17.090 "dma_device_type": 1 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.090 "dma_device_type": 2 00:11:17.090 } 00:11:17.090 ], 00:11:17.090 "driver_specific": { 00:11:17.090 "raid": { 00:11:17.090 "uuid": "633b1cf8-7c82-463c-8ca7-ecacff706a83", 00:11:17.090 "strip_size_kb": 64, 00:11:17.090 "state": "online", 00:11:17.090 "raid_level": "raid0", 00:11:17.090 "superblock": false, 00:11:17.090 "num_base_bdevs": 4, 00:11:17.090 "num_base_bdevs_discovered": 4, 00:11:17.090 "num_base_bdevs_operational": 4, 00:11:17.090 "base_bdevs_list": [ 00:11:17.090 { 00:11:17.090 "name": "BaseBdev1", 00:11:17.090 "uuid": "d87a43be-9bae-4b43-a03b-33b7d2788908", 00:11:17.090 "is_configured": true, 00:11:17.090 "data_offset": 0, 00:11:17.090 "data_size": 65536 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "name": "BaseBdev2", 00:11:17.090 "uuid": "639dd6ed-a4dd-437d-873e-f20bb15eee24", 00:11:17.090 "is_configured": true, 00:11:17.090 "data_offset": 0, 00:11:17.090 "data_size": 65536 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "name": "BaseBdev3", 00:11:17.090 "uuid": "17d7bd54-26e8-4447-83d4-8222ce7c98be", 00:11:17.090 "is_configured": true, 00:11:17.090 "data_offset": 0, 00:11:17.090 "data_size": 65536 00:11:17.090 }, 00:11:17.090 { 00:11:17.090 "name": "BaseBdev4", 00:11:17.090 "uuid": "390a5809-8f0b-4758-aa83-8b13ff408d8e", 00:11:17.090 "is_configured": true, 00:11:17.090 "data_offset": 0, 00:11:17.090 "data_size": 65536 00:11:17.090 } 00:11:17.090 ] 00:11:17.090 } 00:11:17.090 } 00:11:17.090 }' 00:11:17.090 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.090 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:17.090 BaseBdev2 00:11:17.090 BaseBdev3 
00:11:17.090 BaseBdev4' 00:11:17.090 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.350 19:09:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.350 19:09:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.350 19:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.350 [2024-11-27 19:09:26.959340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.350 [2024-11-27 19:09:26.959375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.350 [2024-11-27 19:09:26.959435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.610 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.610 "name": "Existed_Raid", 00:11:17.610 "uuid": "633b1cf8-7c82-463c-8ca7-ecacff706a83", 00:11:17.610 "strip_size_kb": 64, 00:11:17.610 "state": "offline", 00:11:17.610 "raid_level": "raid0", 00:11:17.610 "superblock": false, 00:11:17.610 "num_base_bdevs": 4, 00:11:17.610 "num_base_bdevs_discovered": 3, 00:11:17.610 "num_base_bdevs_operational": 3, 00:11:17.610 "base_bdevs_list": [ 00:11:17.610 { 00:11:17.610 "name": null, 00:11:17.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.610 "is_configured": false, 00:11:17.610 "data_offset": 0, 00:11:17.610 "data_size": 65536 00:11:17.610 }, 00:11:17.610 { 00:11:17.610 "name": "BaseBdev2", 00:11:17.610 "uuid": "639dd6ed-a4dd-437d-873e-f20bb15eee24", 00:11:17.610 "is_configured": 
true, 00:11:17.610 "data_offset": 0, 00:11:17.610 "data_size": 65536 00:11:17.610 }, 00:11:17.610 { 00:11:17.610 "name": "BaseBdev3", 00:11:17.610 "uuid": "17d7bd54-26e8-4447-83d4-8222ce7c98be", 00:11:17.610 "is_configured": true, 00:11:17.610 "data_offset": 0, 00:11:17.610 "data_size": 65536 00:11:17.610 }, 00:11:17.610 { 00:11:17.610 "name": "BaseBdev4", 00:11:17.610 "uuid": "390a5809-8f0b-4758-aa83-8b13ff408d8e", 00:11:17.610 "is_configured": true, 00:11:17.610 "data_offset": 0, 00:11:17.611 "data_size": 65536 00:11:17.611 } 00:11:17.611 ] 00:11:17.611 }' 00:11:17.611 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.611 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.870 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.870 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.870 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:17.871 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.871 [2024-11-27 19:09:27.474259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 [2024-11-27 19:09:27.637424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.131 19:09:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.131 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.391 [2024-11-27 19:09:27.786243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:18.391 [2024-11-27 19:09:27.786306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.391 BaseBdev2 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.391 19:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.391 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.391 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:18.391 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.391 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.391 [ 00:11:18.391 { 00:11:18.391 "name": "BaseBdev2", 00:11:18.391 "aliases": [ 00:11:18.391 "f9054832-4cbc-4a50-b6a6-662d1580861b" 00:11:18.391 ], 00:11:18.391 "product_name": "Malloc disk", 00:11:18.391 "block_size": 512, 00:11:18.391 "num_blocks": 65536, 00:11:18.391 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:18.391 "assigned_rate_limits": { 00:11:18.391 "rw_ios_per_sec": 0, 00:11:18.391 "rw_mbytes_per_sec": 0, 00:11:18.391 "r_mbytes_per_sec": 0, 00:11:18.651 "w_mbytes_per_sec": 0 00:11:18.651 }, 00:11:18.651 "claimed": false, 00:11:18.651 "zoned": false, 00:11:18.651 "supported_io_types": { 00:11:18.651 "read": true, 00:11:18.651 "write": true, 00:11:18.651 "unmap": true, 00:11:18.651 "flush": true, 00:11:18.651 "reset": true, 00:11:18.651 "nvme_admin": false, 00:11:18.651 "nvme_io": false, 00:11:18.651 "nvme_io_md": false, 00:11:18.651 "write_zeroes": true, 00:11:18.651 "zcopy": true, 00:11:18.651 "get_zone_info": false, 00:11:18.651 "zone_management": false, 00:11:18.651 "zone_append": false, 00:11:18.651 "compare": false, 00:11:18.651 "compare_and_write": false, 00:11:18.651 "abort": true, 00:11:18.651 "seek_hole": false, 00:11:18.651 
"seek_data": false, 00:11:18.651 "copy": true, 00:11:18.651 "nvme_iov_md": false 00:11:18.651 }, 00:11:18.651 "memory_domains": [ 00:11:18.651 { 00:11:18.651 "dma_device_id": "system", 00:11:18.651 "dma_device_type": 1 00:11:18.651 }, 00:11:18.651 { 00:11:18.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.651 "dma_device_type": 2 00:11:18.651 } 00:11:18.651 ], 00:11:18.651 "driver_specific": {} 00:11:18.651 } 00:11:18.651 ] 00:11:18.651 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.651 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.651 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.651 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 BaseBdev3 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 [ 00:11:18.652 { 00:11:18.652 "name": "BaseBdev3", 00:11:18.652 "aliases": [ 00:11:18.652 "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f" 00:11:18.652 ], 00:11:18.652 "product_name": "Malloc disk", 00:11:18.652 "block_size": 512, 00:11:18.652 "num_blocks": 65536, 00:11:18.652 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:18.652 "assigned_rate_limits": { 00:11:18.652 "rw_ios_per_sec": 0, 00:11:18.652 "rw_mbytes_per_sec": 0, 00:11:18.652 "r_mbytes_per_sec": 0, 00:11:18.652 "w_mbytes_per_sec": 0 00:11:18.652 }, 00:11:18.652 "claimed": false, 00:11:18.652 "zoned": false, 00:11:18.652 "supported_io_types": { 00:11:18.652 "read": true, 00:11:18.652 "write": true, 00:11:18.652 "unmap": true, 00:11:18.652 "flush": true, 00:11:18.652 "reset": true, 00:11:18.652 "nvme_admin": false, 00:11:18.652 "nvme_io": false, 00:11:18.652 "nvme_io_md": false, 00:11:18.652 "write_zeroes": true, 00:11:18.652 "zcopy": true, 00:11:18.652 "get_zone_info": false, 00:11:18.652 "zone_management": false, 00:11:18.652 "zone_append": false, 00:11:18.652 "compare": false, 00:11:18.652 "compare_and_write": false, 00:11:18.652 "abort": true, 00:11:18.652 "seek_hole": false, 00:11:18.652 "seek_data": false, 
00:11:18.652 "copy": true, 00:11:18.652 "nvme_iov_md": false 00:11:18.652 }, 00:11:18.652 "memory_domains": [ 00:11:18.652 { 00:11:18.652 "dma_device_id": "system", 00:11:18.652 "dma_device_type": 1 00:11:18.652 }, 00:11:18.652 { 00:11:18.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.652 "dma_device_type": 2 00:11:18.652 } 00:11:18.652 ], 00:11:18.652 "driver_specific": {} 00:11:18.652 } 00:11:18.652 ] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 BaseBdev4 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.652 
19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 [ 00:11:18.652 { 00:11:18.652 "name": "BaseBdev4", 00:11:18.652 "aliases": [ 00:11:18.652 "69d37c6a-b0b4-4780-bff1-c21f65db844b" 00:11:18.652 ], 00:11:18.652 "product_name": "Malloc disk", 00:11:18.652 "block_size": 512, 00:11:18.652 "num_blocks": 65536, 00:11:18.652 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:18.652 "assigned_rate_limits": { 00:11:18.652 "rw_ios_per_sec": 0, 00:11:18.652 "rw_mbytes_per_sec": 0, 00:11:18.652 "r_mbytes_per_sec": 0, 00:11:18.652 "w_mbytes_per_sec": 0 00:11:18.652 }, 00:11:18.652 "claimed": false, 00:11:18.652 "zoned": false, 00:11:18.652 "supported_io_types": { 00:11:18.652 "read": true, 00:11:18.652 "write": true, 00:11:18.652 "unmap": true, 00:11:18.652 "flush": true, 00:11:18.652 "reset": true, 00:11:18.652 "nvme_admin": false, 00:11:18.652 "nvme_io": false, 00:11:18.652 "nvme_io_md": false, 00:11:18.652 "write_zeroes": true, 00:11:18.652 "zcopy": true, 00:11:18.652 "get_zone_info": false, 00:11:18.652 "zone_management": false, 00:11:18.652 "zone_append": false, 00:11:18.652 "compare": false, 00:11:18.652 "compare_and_write": false, 00:11:18.652 "abort": true, 00:11:18.652 "seek_hole": false, 00:11:18.652 "seek_data": false, 00:11:18.652 
"copy": true, 00:11:18.652 "nvme_iov_md": false 00:11:18.652 }, 00:11:18.652 "memory_domains": [ 00:11:18.652 { 00:11:18.652 "dma_device_id": "system", 00:11:18.652 "dma_device_type": 1 00:11:18.652 }, 00:11:18.652 { 00:11:18.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.652 "dma_device_type": 2 00:11:18.652 } 00:11:18.652 ], 00:11:18.652 "driver_specific": {} 00:11:18.652 } 00:11:18.652 ] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 [2024-11-27 19:09:28.215075] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.652 [2024-11-27 19:09:28.215176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.652 [2024-11-27 19:09:28.215208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.652 [2024-11-27 19:09:28.217621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.652 [2024-11-27 19:09:28.217677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.652 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.652 "name": "Existed_Raid", 00:11:18.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.652 "strip_size_kb": 64, 00:11:18.652 "state": "configuring", 00:11:18.652 
"raid_level": "raid0", 00:11:18.652 "superblock": false, 00:11:18.653 "num_base_bdevs": 4, 00:11:18.653 "num_base_bdevs_discovered": 3, 00:11:18.653 "num_base_bdevs_operational": 4, 00:11:18.653 "base_bdevs_list": [ 00:11:18.653 { 00:11:18.653 "name": "BaseBdev1", 00:11:18.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.653 "is_configured": false, 00:11:18.653 "data_offset": 0, 00:11:18.653 "data_size": 0 00:11:18.653 }, 00:11:18.653 { 00:11:18.653 "name": "BaseBdev2", 00:11:18.653 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:18.653 "is_configured": true, 00:11:18.653 "data_offset": 0, 00:11:18.653 "data_size": 65536 00:11:18.653 }, 00:11:18.653 { 00:11:18.653 "name": "BaseBdev3", 00:11:18.653 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:18.653 "is_configured": true, 00:11:18.653 "data_offset": 0, 00:11:18.653 "data_size": 65536 00:11:18.653 }, 00:11:18.653 { 00:11:18.653 "name": "BaseBdev4", 00:11:18.653 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:18.653 "is_configured": true, 00:11:18.653 "data_offset": 0, 00:11:18.653 "data_size": 65536 00:11:18.653 } 00:11:18.653 ] 00:11:18.653 }' 00:11:18.653 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.653 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:19.221 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.221 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 [2024-11-27 19:09:28.594452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.221 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.222 "name": "Existed_Raid", 00:11:19.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.222 "strip_size_kb": 64, 00:11:19.222 "state": "configuring", 00:11:19.222 "raid_level": "raid0", 00:11:19.222 "superblock": false, 00:11:19.222 
"num_base_bdevs": 4, 00:11:19.222 "num_base_bdevs_discovered": 2, 00:11:19.222 "num_base_bdevs_operational": 4, 00:11:19.222 "base_bdevs_list": [ 00:11:19.222 { 00:11:19.222 "name": "BaseBdev1", 00:11:19.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.222 "is_configured": false, 00:11:19.222 "data_offset": 0, 00:11:19.222 "data_size": 0 00:11:19.222 }, 00:11:19.222 { 00:11:19.222 "name": null, 00:11:19.222 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:19.222 "is_configured": false, 00:11:19.222 "data_offset": 0, 00:11:19.222 "data_size": 65536 00:11:19.222 }, 00:11:19.222 { 00:11:19.222 "name": "BaseBdev3", 00:11:19.222 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:19.222 "is_configured": true, 00:11:19.222 "data_offset": 0, 00:11:19.222 "data_size": 65536 00:11:19.222 }, 00:11:19.222 { 00:11:19.222 "name": "BaseBdev4", 00:11:19.222 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:19.222 "is_configured": true, 00:11:19.222 "data_offset": 0, 00:11:19.222 "data_size": 65536 00:11:19.222 } 00:11:19.222 ] 00:11:19.222 }' 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.222 19:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.481 19:09:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.481 [2024-11-27 19:09:29.112012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.481 BaseBdev1 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.481 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.482 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.482 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.482 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.482 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.741 [ 00:11:19.741 { 00:11:19.741 "name": "BaseBdev1", 00:11:19.741 "aliases": [ 00:11:19.741 "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee" 00:11:19.741 ], 00:11:19.741 "product_name": "Malloc disk", 00:11:19.741 "block_size": 512, 00:11:19.741 "num_blocks": 65536, 00:11:19.741 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:19.741 "assigned_rate_limits": { 00:11:19.741 "rw_ios_per_sec": 0, 00:11:19.741 "rw_mbytes_per_sec": 0, 00:11:19.741 "r_mbytes_per_sec": 0, 00:11:19.741 "w_mbytes_per_sec": 0 00:11:19.741 }, 00:11:19.741 "claimed": true, 00:11:19.741 "claim_type": "exclusive_write", 00:11:19.741 "zoned": false, 00:11:19.741 "supported_io_types": { 00:11:19.741 "read": true, 00:11:19.741 "write": true, 00:11:19.741 "unmap": true, 00:11:19.741 "flush": true, 00:11:19.741 "reset": true, 00:11:19.741 "nvme_admin": false, 00:11:19.741 "nvme_io": false, 00:11:19.741 "nvme_io_md": false, 00:11:19.741 "write_zeroes": true, 00:11:19.741 "zcopy": true, 00:11:19.741 "get_zone_info": false, 00:11:19.741 "zone_management": false, 00:11:19.741 "zone_append": false, 00:11:19.741 "compare": false, 00:11:19.741 "compare_and_write": false, 00:11:19.741 "abort": true, 00:11:19.741 "seek_hole": false, 00:11:19.741 "seek_data": false, 00:11:19.741 "copy": true, 00:11:19.741 "nvme_iov_md": false 00:11:19.741 }, 00:11:19.741 "memory_domains": [ 00:11:19.741 { 00:11:19.741 "dma_device_id": "system", 00:11:19.741 "dma_device_type": 1 00:11:19.741 }, 00:11:19.741 { 00:11:19.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.741 "dma_device_type": 2 00:11:19.741 } 00:11:19.741 ], 00:11:19.741 "driver_specific": {} 00:11:19.741 } 00:11:19.741 ] 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.741 "name": "Existed_Raid", 00:11:19.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.741 "strip_size_kb": 64, 00:11:19.741 "state": "configuring", 00:11:19.741 "raid_level": "raid0", 00:11:19.741 "superblock": false, 
00:11:19.741 "num_base_bdevs": 4, 00:11:19.741 "num_base_bdevs_discovered": 3, 00:11:19.741 "num_base_bdevs_operational": 4, 00:11:19.741 "base_bdevs_list": [ 00:11:19.741 { 00:11:19.741 "name": "BaseBdev1", 00:11:19.741 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:19.741 "is_configured": true, 00:11:19.741 "data_offset": 0, 00:11:19.741 "data_size": 65536 00:11:19.741 }, 00:11:19.741 { 00:11:19.741 "name": null, 00:11:19.741 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:19.741 "is_configured": false, 00:11:19.741 "data_offset": 0, 00:11:19.741 "data_size": 65536 00:11:19.741 }, 00:11:19.741 { 00:11:19.741 "name": "BaseBdev3", 00:11:19.741 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:19.741 "is_configured": true, 00:11:19.741 "data_offset": 0, 00:11:19.741 "data_size": 65536 00:11:19.741 }, 00:11:19.741 { 00:11:19.741 "name": "BaseBdev4", 00:11:19.741 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:19.741 "is_configured": true, 00:11:19.741 "data_offset": 0, 00:11:19.741 "data_size": 65536 00:11:19.741 } 00:11:19.741 ] 00:11:19.741 }' 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.741 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.999 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.999 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.999 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.999 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:20.258 19:09:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.258 [2024-11-27 19:09:29.675198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.258 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.259 "name": "Existed_Raid", 00:11:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.259 "strip_size_kb": 64, 00:11:20.259 "state": "configuring", 00:11:20.259 "raid_level": "raid0", 00:11:20.259 "superblock": false, 00:11:20.259 "num_base_bdevs": 4, 00:11:20.259 "num_base_bdevs_discovered": 2, 00:11:20.259 "num_base_bdevs_operational": 4, 00:11:20.259 "base_bdevs_list": [ 00:11:20.259 { 00:11:20.259 "name": "BaseBdev1", 00:11:20.259 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:20.259 "is_configured": true, 00:11:20.259 "data_offset": 0, 00:11:20.259 "data_size": 65536 00:11:20.259 }, 00:11:20.259 { 00:11:20.259 "name": null, 00:11:20.259 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:20.259 "is_configured": false, 00:11:20.259 "data_offset": 0, 00:11:20.259 "data_size": 65536 00:11:20.259 }, 00:11:20.259 { 00:11:20.259 "name": null, 00:11:20.259 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:20.259 "is_configured": false, 00:11:20.259 "data_offset": 0, 00:11:20.259 "data_size": 65536 00:11:20.259 }, 00:11:20.259 { 00:11:20.259 "name": "BaseBdev4", 00:11:20.259 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:20.259 "is_configured": true, 00:11:20.259 "data_offset": 0, 00:11:20.259 "data_size": 65536 00:11:20.259 } 00:11:20.259 ] 00:11:20.259 }' 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.259 19:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.518 [2024-11-27 19:09:30.142368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.518 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.778 "name": "Existed_Raid", 00:11:20.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.778 "strip_size_kb": 64, 00:11:20.778 "state": "configuring", 00:11:20.778 "raid_level": "raid0", 00:11:20.778 "superblock": false, 00:11:20.778 "num_base_bdevs": 4, 00:11:20.778 "num_base_bdevs_discovered": 3, 00:11:20.778 "num_base_bdevs_operational": 4, 00:11:20.778 "base_bdevs_list": [ 00:11:20.778 { 00:11:20.778 "name": "BaseBdev1", 00:11:20.778 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:20.778 "is_configured": true, 00:11:20.778 "data_offset": 0, 00:11:20.778 "data_size": 65536 00:11:20.778 }, 00:11:20.778 { 00:11:20.778 "name": null, 00:11:20.778 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:20.778 "is_configured": false, 00:11:20.778 "data_offset": 0, 00:11:20.778 "data_size": 65536 00:11:20.778 }, 00:11:20.778 { 00:11:20.778 "name": "BaseBdev3", 00:11:20.778 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:20.778 "is_configured": 
true, 00:11:20.778 "data_offset": 0, 00:11:20.778 "data_size": 65536 00:11:20.778 }, 00:11:20.778 { 00:11:20.778 "name": "BaseBdev4", 00:11:20.778 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:20.778 "is_configured": true, 00:11:20.778 "data_offset": 0, 00:11:20.778 "data_size": 65536 00:11:20.778 } 00:11:20.778 ] 00:11:20.778 }' 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.778 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.036 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.036 [2024-11-27 19:09:30.633545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.296 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.296 "name": "Existed_Raid", 00:11:21.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.296 "strip_size_kb": 64, 00:11:21.296 "state": "configuring", 00:11:21.296 "raid_level": "raid0", 00:11:21.296 "superblock": false, 00:11:21.296 "num_base_bdevs": 4, 00:11:21.296 "num_base_bdevs_discovered": 2, 00:11:21.296 "num_base_bdevs_operational": 4, 00:11:21.296 
"base_bdevs_list": [ 00:11:21.296 { 00:11:21.296 "name": null, 00:11:21.296 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:21.296 "is_configured": false, 00:11:21.296 "data_offset": 0, 00:11:21.296 "data_size": 65536 00:11:21.296 }, 00:11:21.296 { 00:11:21.296 "name": null, 00:11:21.296 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:21.296 "is_configured": false, 00:11:21.296 "data_offset": 0, 00:11:21.296 "data_size": 65536 00:11:21.296 }, 00:11:21.296 { 00:11:21.296 "name": "BaseBdev3", 00:11:21.296 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:21.296 "is_configured": true, 00:11:21.296 "data_offset": 0, 00:11:21.296 "data_size": 65536 00:11:21.296 }, 00:11:21.296 { 00:11:21.297 "name": "BaseBdev4", 00:11:21.297 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:21.297 "is_configured": true, 00:11:21.297 "data_offset": 0, 00:11:21.297 "data_size": 65536 00:11:21.297 } 00:11:21.297 ] 00:11:21.297 }' 00:11:21.297 19:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.297 19:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:21.866 19:09:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.866 [2024-11-27 19:09:31.280468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.866 "name": "Existed_Raid", 00:11:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.866 "strip_size_kb": 64, 00:11:21.866 "state": "configuring", 00:11:21.866 "raid_level": "raid0", 00:11:21.866 "superblock": false, 00:11:21.866 "num_base_bdevs": 4, 00:11:21.866 "num_base_bdevs_discovered": 3, 00:11:21.866 "num_base_bdevs_operational": 4, 00:11:21.866 "base_bdevs_list": [ 00:11:21.866 { 00:11:21.866 "name": null, 00:11:21.866 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:21.866 "is_configured": false, 00:11:21.866 "data_offset": 0, 00:11:21.866 "data_size": 65536 00:11:21.866 }, 00:11:21.866 { 00:11:21.866 "name": "BaseBdev2", 00:11:21.866 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:21.866 "is_configured": true, 00:11:21.866 "data_offset": 0, 00:11:21.866 "data_size": 65536 00:11:21.866 }, 00:11:21.866 { 00:11:21.866 "name": "BaseBdev3", 00:11:21.866 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:21.866 "is_configured": true, 00:11:21.866 "data_offset": 0, 00:11:21.866 "data_size": 65536 00:11:21.866 }, 00:11:21.866 { 00:11:21.866 "name": "BaseBdev4", 00:11:21.866 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:21.866 "is_configured": true, 00:11:21.866 "data_offset": 0, 00:11:21.866 "data_size": 65536 00:11:21.866 } 00:11:21.866 ] 00:11:21.866 }' 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.866 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.126 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.126 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:22.126 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.126 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.126 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.385 [2024-11-27 19:09:31.862868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.385 [2024-11-27 19:09:31.863003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.385 [2024-11-27 19:09:31.863016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:22.385 [2024-11-27 19:09:31.863358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:22.385 [2024-11-27 19:09:31.863529] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.385 [2024-11-27 19:09:31.863542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.385 [2024-11-27 19:09:31.863828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.385 NewBaseBdev 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.385 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.385 [ 00:11:22.385 { 
00:11:22.385 "name": "NewBaseBdev", 00:11:22.385 "aliases": [ 00:11:22.385 "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee" 00:11:22.385 ], 00:11:22.385 "product_name": "Malloc disk", 00:11:22.385 "block_size": 512, 00:11:22.385 "num_blocks": 65536, 00:11:22.385 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:22.385 "assigned_rate_limits": { 00:11:22.385 "rw_ios_per_sec": 0, 00:11:22.385 "rw_mbytes_per_sec": 0, 00:11:22.385 "r_mbytes_per_sec": 0, 00:11:22.385 "w_mbytes_per_sec": 0 00:11:22.385 }, 00:11:22.385 "claimed": true, 00:11:22.385 "claim_type": "exclusive_write", 00:11:22.385 "zoned": false, 00:11:22.385 "supported_io_types": { 00:11:22.385 "read": true, 00:11:22.385 "write": true, 00:11:22.385 "unmap": true, 00:11:22.386 "flush": true, 00:11:22.386 "reset": true, 00:11:22.386 "nvme_admin": false, 00:11:22.386 "nvme_io": false, 00:11:22.386 "nvme_io_md": false, 00:11:22.386 "write_zeroes": true, 00:11:22.386 "zcopy": true, 00:11:22.386 "get_zone_info": false, 00:11:22.386 "zone_management": false, 00:11:22.386 "zone_append": false, 00:11:22.386 "compare": false, 00:11:22.386 "compare_and_write": false, 00:11:22.386 "abort": true, 00:11:22.386 "seek_hole": false, 00:11:22.386 "seek_data": false, 00:11:22.386 "copy": true, 00:11:22.386 "nvme_iov_md": false 00:11:22.386 }, 00:11:22.386 "memory_domains": [ 00:11:22.386 { 00:11:22.386 "dma_device_id": "system", 00:11:22.386 "dma_device_type": 1 00:11:22.386 }, 00:11:22.386 { 00:11:22.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.386 "dma_device_type": 2 00:11:22.386 } 00:11:22.386 ], 00:11:22.386 "driver_specific": {} 00:11:22.386 } 00:11:22.386 ] 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:22.386 
19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.386 "name": "Existed_Raid", 00:11:22.386 "uuid": "80aa1e86-323e-473d-a6a5-d95b8ffa2e9e", 00:11:22.386 "strip_size_kb": 64, 00:11:22.386 "state": "online", 00:11:22.386 "raid_level": "raid0", 00:11:22.386 "superblock": false, 00:11:22.386 "num_base_bdevs": 4, 00:11:22.386 "num_base_bdevs_discovered": 4, 00:11:22.386 
"num_base_bdevs_operational": 4, 00:11:22.386 "base_bdevs_list": [ 00:11:22.386 { 00:11:22.386 "name": "NewBaseBdev", 00:11:22.386 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:22.386 "is_configured": true, 00:11:22.386 "data_offset": 0, 00:11:22.386 "data_size": 65536 00:11:22.386 }, 00:11:22.386 { 00:11:22.386 "name": "BaseBdev2", 00:11:22.386 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:22.386 "is_configured": true, 00:11:22.386 "data_offset": 0, 00:11:22.386 "data_size": 65536 00:11:22.386 }, 00:11:22.386 { 00:11:22.386 "name": "BaseBdev3", 00:11:22.386 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:22.386 "is_configured": true, 00:11:22.386 "data_offset": 0, 00:11:22.386 "data_size": 65536 00:11:22.386 }, 00:11:22.386 { 00:11:22.386 "name": "BaseBdev4", 00:11:22.386 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:22.386 "is_configured": true, 00:11:22.386 "data_offset": 0, 00:11:22.386 "data_size": 65536 00:11:22.386 } 00:11:22.386 ] 00:11:22.386 }' 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.386 19:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.955 [2024-11-27 19:09:32.338509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.955 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.955 "name": "Existed_Raid", 00:11:22.955 "aliases": [ 00:11:22.955 "80aa1e86-323e-473d-a6a5-d95b8ffa2e9e" 00:11:22.955 ], 00:11:22.955 "product_name": "Raid Volume", 00:11:22.955 "block_size": 512, 00:11:22.955 "num_blocks": 262144, 00:11:22.955 "uuid": "80aa1e86-323e-473d-a6a5-d95b8ffa2e9e", 00:11:22.955 "assigned_rate_limits": { 00:11:22.955 "rw_ios_per_sec": 0, 00:11:22.955 "rw_mbytes_per_sec": 0, 00:11:22.955 "r_mbytes_per_sec": 0, 00:11:22.955 "w_mbytes_per_sec": 0 00:11:22.955 }, 00:11:22.955 "claimed": false, 00:11:22.955 "zoned": false, 00:11:22.955 "supported_io_types": { 00:11:22.955 "read": true, 00:11:22.955 "write": true, 00:11:22.955 "unmap": true, 00:11:22.955 "flush": true, 00:11:22.955 "reset": true, 00:11:22.955 "nvme_admin": false, 00:11:22.955 "nvme_io": false, 00:11:22.955 "nvme_io_md": false, 00:11:22.955 "write_zeroes": true, 00:11:22.955 "zcopy": false, 00:11:22.955 "get_zone_info": false, 00:11:22.955 "zone_management": false, 00:11:22.955 "zone_append": false, 00:11:22.955 "compare": false, 00:11:22.955 "compare_and_write": false, 00:11:22.955 "abort": false, 00:11:22.955 "seek_hole": false, 00:11:22.955 "seek_data": false, 00:11:22.955 "copy": false, 00:11:22.955 "nvme_iov_md": false 00:11:22.955 }, 00:11:22.955 "memory_domains": [ 00:11:22.955 { 00:11:22.955 "dma_device_id": "system", 
00:11:22.955 "dma_device_type": 1 00:11:22.955 }, 00:11:22.955 { 00:11:22.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.955 "dma_device_type": 2 00:11:22.955 }, 00:11:22.955 { 00:11:22.955 "dma_device_id": "system", 00:11:22.955 "dma_device_type": 1 00:11:22.955 }, 00:11:22.955 { 00:11:22.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.955 "dma_device_type": 2 00:11:22.955 }, 00:11:22.955 { 00:11:22.955 "dma_device_id": "system", 00:11:22.955 "dma_device_type": 1 00:11:22.955 }, 00:11:22.955 { 00:11:22.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.955 "dma_device_type": 2 00:11:22.955 }, 00:11:22.955 { 00:11:22.956 "dma_device_id": "system", 00:11:22.956 "dma_device_type": 1 00:11:22.956 }, 00:11:22.956 { 00:11:22.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.956 "dma_device_type": 2 00:11:22.956 } 00:11:22.956 ], 00:11:22.956 "driver_specific": { 00:11:22.956 "raid": { 00:11:22.956 "uuid": "80aa1e86-323e-473d-a6a5-d95b8ffa2e9e", 00:11:22.956 "strip_size_kb": 64, 00:11:22.956 "state": "online", 00:11:22.956 "raid_level": "raid0", 00:11:22.956 "superblock": false, 00:11:22.956 "num_base_bdevs": 4, 00:11:22.956 "num_base_bdevs_discovered": 4, 00:11:22.956 "num_base_bdevs_operational": 4, 00:11:22.956 "base_bdevs_list": [ 00:11:22.956 { 00:11:22.956 "name": "NewBaseBdev", 00:11:22.956 "uuid": "f493d2d1-9d1b-4cc8-a8a2-f531ae4af7ee", 00:11:22.956 "is_configured": true, 00:11:22.956 "data_offset": 0, 00:11:22.956 "data_size": 65536 00:11:22.956 }, 00:11:22.956 { 00:11:22.956 "name": "BaseBdev2", 00:11:22.956 "uuid": "f9054832-4cbc-4a50-b6a6-662d1580861b", 00:11:22.956 "is_configured": true, 00:11:22.956 "data_offset": 0, 00:11:22.956 "data_size": 65536 00:11:22.956 }, 00:11:22.956 { 00:11:22.956 "name": "BaseBdev3", 00:11:22.956 "uuid": "2d4790b6-9a08-4f62-b59c-4cc3121ddd4f", 00:11:22.956 "is_configured": true, 00:11:22.956 "data_offset": 0, 00:11:22.956 "data_size": 65536 00:11:22.956 }, 00:11:22.956 { 00:11:22.956 "name": "BaseBdev4", 
00:11:22.956 "uuid": "69d37c6a-b0b4-4780-bff1-c21f65db844b", 00:11:22.956 "is_configured": true, 00:11:22.956 "data_offset": 0, 00:11:22.956 "data_size": 65536 00:11:22.956 } 00:11:22.956 ] 00:11:22.956 } 00:11:22.956 } 00:11:22.956 }' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:22.956 BaseBdev2 00:11:22.956 BaseBdev3 00:11:22.956 BaseBdev4' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.956 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:23.216 19:09:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.216 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.216 [2024-11-27 19:09:32.657539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.216 [2024-11-27 19:09:32.657624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.216 [2024-11-27 19:09:32.657748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.216 [2024-11-27 19:09:32.657857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.216 [2024-11-27 19:09:32.657904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69476 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69476 ']' 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69476 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69476 00:11:23.217 killing process with pid 69476 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69476' 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69476 00:11:23.217 [2024-11-27 19:09:32.703062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.217 19:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69476 00:11:23.785 [2024-11-27 19:09:33.138001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.167 00:11:25.167 real 0m11.853s 00:11:25.167 user 0m18.442s 00:11:25.167 sys 0m2.305s 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.167 ************************************ 00:11:25.167 END TEST raid_state_function_test 00:11:25.167 ************************************ 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.167 19:09:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:11:25.167 19:09:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.167 19:09:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.167 19:09:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.167 ************************************ 00:11:25.167 START TEST raid_state_function_test_sb 00:11:25.167 ************************************ 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.167 19:09:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70153 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.167 Process raid pid: 70153 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70153' 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70153 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70153 ']' 00:11:25.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.167 19:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.167 [2024-11-27 19:09:34.539989] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:25.167 [2024-11-27 19:09:34.540133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.167 [2024-11-27 19:09:34.705433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.427 [2024-11-27 19:09:34.842506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.686 [2024-11-27 19:09:35.080497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.686 [2024-11-27 19:09:35.080556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.946 [2024-11-27 19:09:35.364610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.946 [2024-11-27 19:09:35.364702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.946 [2024-11-27 19:09:35.364714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.946 [2024-11-27 19:09:35.364725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.946 [2024-11-27 19:09:35.364731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:25.946 [2024-11-27 19:09:35.364743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.946 [2024-11-27 19:09:35.364749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:25.946 [2024-11-27 19:09:35.364760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.946 19:09:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.946 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.947 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.947 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.947 "name": "Existed_Raid", 00:11:25.947 "uuid": "ee4ea698-e0ac-4be3-906b-fac6dbcdde39", 00:11:25.947 "strip_size_kb": 64, 00:11:25.947 "state": "configuring", 00:11:25.947 "raid_level": "raid0", 00:11:25.947 "superblock": true, 00:11:25.947 "num_base_bdevs": 4, 00:11:25.947 "num_base_bdevs_discovered": 0, 00:11:25.947 "num_base_bdevs_operational": 4, 00:11:25.947 "base_bdevs_list": [ 00:11:25.947 { 00:11:25.947 "name": "BaseBdev1", 00:11:25.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.947 "is_configured": false, 00:11:25.947 "data_offset": 0, 00:11:25.947 "data_size": 0 00:11:25.947 }, 00:11:25.947 { 00:11:25.947 "name": "BaseBdev2", 00:11:25.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.947 "is_configured": false, 00:11:25.947 "data_offset": 0, 00:11:25.947 "data_size": 0 00:11:25.947 }, 00:11:25.947 { 00:11:25.947 "name": "BaseBdev3", 00:11:25.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.947 "is_configured": false, 00:11:25.947 "data_offset": 0, 00:11:25.947 "data_size": 0 00:11:25.947 }, 00:11:25.947 { 00:11:25.947 "name": "BaseBdev4", 00:11:25.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.947 "is_configured": false, 00:11:25.947 "data_offset": 0, 00:11:25.947 "data_size": 0 00:11:25.947 } 00:11:25.947 ] 00:11:25.947 }' 00:11:25.947 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.947 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 [2024-11-27 19:09:35.807790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.208 [2024-11-27 19:09:35.807937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 [2024-11-27 19:09:35.819752] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.208 [2024-11-27 19:09:35.819848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.208 [2024-11-27 19:09:35.819883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.208 [2024-11-27 19:09:35.819911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.208 [2024-11-27 19:09:35.819938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.208 [2024-11-27 19:09:35.819983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.208 [2024-11-27 19:09:35.820015] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:26.208 [2024-11-27 19:09:35.820042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.208 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 [2024-11-27 19:09:35.877058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.468 BaseBdev1 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 [ 00:11:26.468 { 00:11:26.468 "name": "BaseBdev1", 00:11:26.468 "aliases": [ 00:11:26.468 "ce2af981-c819-42da-8240-581e09939f46" 00:11:26.468 ], 00:11:26.468 "product_name": "Malloc disk", 00:11:26.468 "block_size": 512, 00:11:26.468 "num_blocks": 65536, 00:11:26.468 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:26.468 "assigned_rate_limits": { 00:11:26.468 "rw_ios_per_sec": 0, 00:11:26.468 "rw_mbytes_per_sec": 0, 00:11:26.468 "r_mbytes_per_sec": 0, 00:11:26.468 "w_mbytes_per_sec": 0 00:11:26.468 }, 00:11:26.468 "claimed": true, 00:11:26.468 "claim_type": "exclusive_write", 00:11:26.468 "zoned": false, 00:11:26.468 "supported_io_types": { 00:11:26.468 "read": true, 00:11:26.468 "write": true, 00:11:26.468 "unmap": true, 00:11:26.468 "flush": true, 00:11:26.468 "reset": true, 00:11:26.468 "nvme_admin": false, 00:11:26.468 "nvme_io": false, 00:11:26.468 "nvme_io_md": false, 00:11:26.468 "write_zeroes": true, 00:11:26.468 "zcopy": true, 00:11:26.468 "get_zone_info": false, 00:11:26.468 "zone_management": false, 00:11:26.468 "zone_append": false, 00:11:26.468 "compare": false, 00:11:26.468 "compare_and_write": false, 00:11:26.468 "abort": true, 00:11:26.468 "seek_hole": false, 00:11:26.468 "seek_data": false, 00:11:26.468 "copy": true, 00:11:26.468 "nvme_iov_md": false 00:11:26.468 }, 00:11:26.468 "memory_domains": [ 00:11:26.468 { 00:11:26.468 "dma_device_id": "system", 00:11:26.468 "dma_device_type": 1 00:11:26.468 }, 00:11:26.468 { 00:11:26.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.468 "dma_device_type": 2 00:11:26.468 } 00:11:26.468 ], 00:11:26.468 "driver_specific": {} 
00:11:26.468 } 00:11:26.468 ] 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.468 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.468 "name": "Existed_Raid", 00:11:26.468 "uuid": "57676256-cd7a-4fcc-8fd1-84684449461d", 00:11:26.468 "strip_size_kb": 64, 00:11:26.468 "state": "configuring", 00:11:26.468 "raid_level": "raid0", 00:11:26.468 "superblock": true, 00:11:26.468 "num_base_bdevs": 4, 00:11:26.468 "num_base_bdevs_discovered": 1, 00:11:26.468 "num_base_bdevs_operational": 4, 00:11:26.468 "base_bdevs_list": [ 00:11:26.468 { 00:11:26.468 "name": "BaseBdev1", 00:11:26.468 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:26.468 "is_configured": true, 00:11:26.468 "data_offset": 2048, 00:11:26.468 "data_size": 63488 00:11:26.468 }, 00:11:26.468 { 00:11:26.469 "name": "BaseBdev2", 00:11:26.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.469 "is_configured": false, 00:11:26.469 "data_offset": 0, 00:11:26.469 "data_size": 0 00:11:26.469 }, 00:11:26.469 { 00:11:26.469 "name": "BaseBdev3", 00:11:26.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.469 "is_configured": false, 00:11:26.469 "data_offset": 0, 00:11:26.469 "data_size": 0 00:11:26.469 }, 00:11:26.469 { 00:11:26.469 "name": "BaseBdev4", 00:11:26.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.469 "is_configured": false, 00:11:26.469 "data_offset": 0, 00:11:26.469 "data_size": 0 00:11:26.469 } 00:11:26.469 ] 00:11:26.469 }' 00:11:26.469 19:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.469 19:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.039 [2024-11-27 19:09:36.372286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.039 [2024-11-27 19:09:36.372380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.039 [2024-11-27 19:09:36.384382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.039 [2024-11-27 19:09:36.386677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.039 [2024-11-27 19:09:36.386817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.039 [2024-11-27 19:09:36.386834] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.039 [2024-11-27 19:09:36.386846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.039 [2024-11-27 19:09:36.386853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.039 [2024-11-27 19:09:36.386862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:27.039 19:09:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.039 "name": 
"Existed_Raid", 00:11:27.039 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:27.039 "strip_size_kb": 64, 00:11:27.039 "state": "configuring", 00:11:27.039 "raid_level": "raid0", 00:11:27.039 "superblock": true, 00:11:27.039 "num_base_bdevs": 4, 00:11:27.039 "num_base_bdevs_discovered": 1, 00:11:27.039 "num_base_bdevs_operational": 4, 00:11:27.039 "base_bdevs_list": [ 00:11:27.039 { 00:11:27.039 "name": "BaseBdev1", 00:11:27.039 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:27.039 "is_configured": true, 00:11:27.039 "data_offset": 2048, 00:11:27.039 "data_size": 63488 00:11:27.039 }, 00:11:27.039 { 00:11:27.039 "name": "BaseBdev2", 00:11:27.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.039 "is_configured": false, 00:11:27.039 "data_offset": 0, 00:11:27.039 "data_size": 0 00:11:27.039 }, 00:11:27.039 { 00:11:27.039 "name": "BaseBdev3", 00:11:27.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.039 "is_configured": false, 00:11:27.039 "data_offset": 0, 00:11:27.039 "data_size": 0 00:11:27.039 }, 00:11:27.039 { 00:11:27.039 "name": "BaseBdev4", 00:11:27.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.039 "is_configured": false, 00:11:27.039 "data_offset": 0, 00:11:27.039 "data_size": 0 00:11:27.039 } 00:11:27.039 ] 00:11:27.039 }' 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.039 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.300 [2024-11-27 19:09:36.891225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:27.300 BaseBdev2 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.300 [ 00:11:27.300 { 00:11:27.300 "name": "BaseBdev2", 00:11:27.300 "aliases": [ 00:11:27.300 "e7c0b6b9-71cc-46ce-8a07-d99d70652237" 00:11:27.300 ], 00:11:27.300 "product_name": "Malloc disk", 00:11:27.300 "block_size": 512, 00:11:27.300 "num_blocks": 65536, 00:11:27.300 "uuid": "e7c0b6b9-71cc-46ce-8a07-d99d70652237", 00:11:27.300 
"assigned_rate_limits": { 00:11:27.300 "rw_ios_per_sec": 0, 00:11:27.300 "rw_mbytes_per_sec": 0, 00:11:27.300 "r_mbytes_per_sec": 0, 00:11:27.300 "w_mbytes_per_sec": 0 00:11:27.300 }, 00:11:27.300 "claimed": true, 00:11:27.300 "claim_type": "exclusive_write", 00:11:27.300 "zoned": false, 00:11:27.300 "supported_io_types": { 00:11:27.300 "read": true, 00:11:27.300 "write": true, 00:11:27.300 "unmap": true, 00:11:27.300 "flush": true, 00:11:27.300 "reset": true, 00:11:27.300 "nvme_admin": false, 00:11:27.300 "nvme_io": false, 00:11:27.300 "nvme_io_md": false, 00:11:27.300 "write_zeroes": true, 00:11:27.300 "zcopy": true, 00:11:27.300 "get_zone_info": false, 00:11:27.300 "zone_management": false, 00:11:27.300 "zone_append": false, 00:11:27.300 "compare": false, 00:11:27.300 "compare_and_write": false, 00:11:27.300 "abort": true, 00:11:27.300 "seek_hole": false, 00:11:27.300 "seek_data": false, 00:11:27.300 "copy": true, 00:11:27.300 "nvme_iov_md": false 00:11:27.300 }, 00:11:27.300 "memory_domains": [ 00:11:27.300 { 00:11:27.300 "dma_device_id": "system", 00:11:27.300 "dma_device_type": 1 00:11:27.300 }, 00:11:27.300 { 00:11:27.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.300 "dma_device_type": 2 00:11:27.300 } 00:11:27.300 ], 00:11:27.300 "driver_specific": {} 00:11:27.300 } 00:11:27.300 ] 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.300 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.560 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.561 "name": "Existed_Raid", 00:11:27.561 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:27.561 "strip_size_kb": 64, 00:11:27.561 "state": "configuring", 00:11:27.561 "raid_level": "raid0", 00:11:27.561 "superblock": true, 00:11:27.561 "num_base_bdevs": 4, 00:11:27.561 "num_base_bdevs_discovered": 2, 00:11:27.561 "num_base_bdevs_operational": 4, 
00:11:27.561 "base_bdevs_list": [ 00:11:27.561 { 00:11:27.561 "name": "BaseBdev1", 00:11:27.561 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:27.561 "is_configured": true, 00:11:27.561 "data_offset": 2048, 00:11:27.561 "data_size": 63488 00:11:27.561 }, 00:11:27.561 { 00:11:27.561 "name": "BaseBdev2", 00:11:27.561 "uuid": "e7c0b6b9-71cc-46ce-8a07-d99d70652237", 00:11:27.561 "is_configured": true, 00:11:27.561 "data_offset": 2048, 00:11:27.561 "data_size": 63488 00:11:27.561 }, 00:11:27.561 { 00:11:27.561 "name": "BaseBdev3", 00:11:27.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.561 "is_configured": false, 00:11:27.561 "data_offset": 0, 00:11:27.561 "data_size": 0 00:11:27.561 }, 00:11:27.561 { 00:11:27.561 "name": "BaseBdev4", 00:11:27.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.561 "is_configured": false, 00:11:27.561 "data_offset": 0, 00:11:27.561 "data_size": 0 00:11:27.561 } 00:11:27.561 ] 00:11:27.561 }' 00:11:27.561 19:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.561 19:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.821 [2024-11-27 19:09:37.442852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.821 BaseBdev3 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.821 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.081 [ 00:11:28.081 { 00:11:28.081 "name": "BaseBdev3", 00:11:28.081 "aliases": [ 00:11:28.081 "63bc462d-7657-4e75-a8ba-3b82034dccd3" 00:11:28.081 ], 00:11:28.081 "product_name": "Malloc disk", 00:11:28.081 "block_size": 512, 00:11:28.081 "num_blocks": 65536, 00:11:28.081 "uuid": "63bc462d-7657-4e75-a8ba-3b82034dccd3", 00:11:28.081 "assigned_rate_limits": { 00:11:28.081 "rw_ios_per_sec": 0, 00:11:28.081 "rw_mbytes_per_sec": 0, 00:11:28.081 "r_mbytes_per_sec": 0, 00:11:28.081 "w_mbytes_per_sec": 0 00:11:28.081 }, 00:11:28.081 "claimed": true, 00:11:28.081 "claim_type": "exclusive_write", 00:11:28.081 "zoned": false, 00:11:28.081 "supported_io_types": { 00:11:28.081 "read": true, 00:11:28.081 
"write": true, 00:11:28.081 "unmap": true, 00:11:28.081 "flush": true, 00:11:28.081 "reset": true, 00:11:28.081 "nvme_admin": false, 00:11:28.081 "nvme_io": false, 00:11:28.081 "nvme_io_md": false, 00:11:28.081 "write_zeroes": true, 00:11:28.081 "zcopy": true, 00:11:28.081 "get_zone_info": false, 00:11:28.081 "zone_management": false, 00:11:28.081 "zone_append": false, 00:11:28.081 "compare": false, 00:11:28.081 "compare_and_write": false, 00:11:28.081 "abort": true, 00:11:28.081 "seek_hole": false, 00:11:28.081 "seek_data": false, 00:11:28.081 "copy": true, 00:11:28.081 "nvme_iov_md": false 00:11:28.081 }, 00:11:28.081 "memory_domains": [ 00:11:28.081 { 00:11:28.081 "dma_device_id": "system", 00:11:28.081 "dma_device_type": 1 00:11:28.081 }, 00:11:28.081 { 00:11:28.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.081 "dma_device_type": 2 00:11:28.081 } 00:11:28.081 ], 00:11:28.081 "driver_specific": {} 00:11:28.081 } 00:11:28.081 ] 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.081 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.081 "name": "Existed_Raid", 00:11:28.081 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:28.081 "strip_size_kb": 64, 00:11:28.081 "state": "configuring", 00:11:28.081 "raid_level": "raid0", 00:11:28.081 "superblock": true, 00:11:28.081 "num_base_bdevs": 4, 00:11:28.081 "num_base_bdevs_discovered": 3, 00:11:28.082 "num_base_bdevs_operational": 4, 00:11:28.082 "base_bdevs_list": [ 00:11:28.082 { 00:11:28.082 "name": "BaseBdev1", 00:11:28.082 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:28.082 "is_configured": true, 00:11:28.082 "data_offset": 2048, 00:11:28.082 "data_size": 63488 00:11:28.082 }, 00:11:28.082 { 00:11:28.082 "name": "BaseBdev2", 00:11:28.082 "uuid": 
"e7c0b6b9-71cc-46ce-8a07-d99d70652237", 00:11:28.082 "is_configured": true, 00:11:28.082 "data_offset": 2048, 00:11:28.082 "data_size": 63488 00:11:28.082 }, 00:11:28.082 { 00:11:28.082 "name": "BaseBdev3", 00:11:28.082 "uuid": "63bc462d-7657-4e75-a8ba-3b82034dccd3", 00:11:28.082 "is_configured": true, 00:11:28.082 "data_offset": 2048, 00:11:28.082 "data_size": 63488 00:11:28.082 }, 00:11:28.082 { 00:11:28.082 "name": "BaseBdev4", 00:11:28.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.082 "is_configured": false, 00:11:28.082 "data_offset": 0, 00:11:28.082 "data_size": 0 00:11:28.082 } 00:11:28.082 ] 00:11:28.082 }' 00:11:28.082 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.082 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.344 [2024-11-27 19:09:37.962886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:28.344 [2024-11-27 19:09:37.963289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:28.344 [2024-11-27 19:09:37.963311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:28.344 [2024-11-27 19:09:37.963622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:28.344 [2024-11-27 19:09:37.963804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.344 [2024-11-27 19:09:37.963818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:28.344 [2024-11-27 19:09:37.963987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.344 BaseBdev4 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.344 19:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.633 [ 00:11:28.633 { 00:11:28.633 "name": "BaseBdev4", 00:11:28.633 "aliases": [ 00:11:28.633 "39907855-5e52-4bee-a35b-4b9be8178b61" 00:11:28.633 ], 00:11:28.633 "product_name": "Malloc disk", 00:11:28.633 "block_size": 512, 00:11:28.633 
"num_blocks": 65536, 00:11:28.633 "uuid": "39907855-5e52-4bee-a35b-4b9be8178b61", 00:11:28.633 "assigned_rate_limits": { 00:11:28.633 "rw_ios_per_sec": 0, 00:11:28.633 "rw_mbytes_per_sec": 0, 00:11:28.633 "r_mbytes_per_sec": 0, 00:11:28.633 "w_mbytes_per_sec": 0 00:11:28.633 }, 00:11:28.633 "claimed": true, 00:11:28.633 "claim_type": "exclusive_write", 00:11:28.633 "zoned": false, 00:11:28.633 "supported_io_types": { 00:11:28.633 "read": true, 00:11:28.633 "write": true, 00:11:28.633 "unmap": true, 00:11:28.633 "flush": true, 00:11:28.633 "reset": true, 00:11:28.633 "nvme_admin": false, 00:11:28.633 "nvme_io": false, 00:11:28.633 "nvme_io_md": false, 00:11:28.633 "write_zeroes": true, 00:11:28.633 "zcopy": true, 00:11:28.633 "get_zone_info": false, 00:11:28.633 "zone_management": false, 00:11:28.633 "zone_append": false, 00:11:28.633 "compare": false, 00:11:28.633 "compare_and_write": false, 00:11:28.633 "abort": true, 00:11:28.633 "seek_hole": false, 00:11:28.633 "seek_data": false, 00:11:28.633 "copy": true, 00:11:28.633 "nvme_iov_md": false 00:11:28.633 }, 00:11:28.633 "memory_domains": [ 00:11:28.633 { 00:11:28.633 "dma_device_id": "system", 00:11:28.633 "dma_device_type": 1 00:11:28.633 }, 00:11:28.633 { 00:11:28.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.633 "dma_device_type": 2 00:11:28.633 } 00:11:28.633 ], 00:11:28.633 "driver_specific": {} 00:11:28.633 } 00:11:28.633 ] 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.633 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.634 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.634 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.634 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.634 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.634 "name": "Existed_Raid", 00:11:28.634 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:28.634 "strip_size_kb": 64, 00:11:28.634 "state": "online", 00:11:28.634 "raid_level": "raid0", 00:11:28.634 "superblock": true, 00:11:28.634 "num_base_bdevs": 4, 
00:11:28.634 "num_base_bdevs_discovered": 4, 00:11:28.634 "num_base_bdevs_operational": 4, 00:11:28.634 "base_bdevs_list": [ 00:11:28.634 { 00:11:28.634 "name": "BaseBdev1", 00:11:28.634 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 2048, 00:11:28.634 "data_size": 63488 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": "BaseBdev2", 00:11:28.634 "uuid": "e7c0b6b9-71cc-46ce-8a07-d99d70652237", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 2048, 00:11:28.634 "data_size": 63488 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": "BaseBdev3", 00:11:28.634 "uuid": "63bc462d-7657-4e75-a8ba-3b82034dccd3", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 2048, 00:11:28.634 "data_size": 63488 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": "BaseBdev4", 00:11:28.634 "uuid": "39907855-5e52-4bee-a35b-4b9be8178b61", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 2048, 00:11:28.634 "data_size": 63488 00:11:28.634 } 00:11:28.634 ] 00:11:28.634 }' 00:11:28.634 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.634 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.906 
19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.906 [2024-11-27 19:09:38.450498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.906 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.906 "name": "Existed_Raid", 00:11:28.906 "aliases": [ 00:11:28.906 "ed3c3782-6c49-48d2-9863-6bd3891e0b00" 00:11:28.906 ], 00:11:28.906 "product_name": "Raid Volume", 00:11:28.906 "block_size": 512, 00:11:28.906 "num_blocks": 253952, 00:11:28.906 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:28.906 "assigned_rate_limits": { 00:11:28.906 "rw_ios_per_sec": 0, 00:11:28.906 "rw_mbytes_per_sec": 0, 00:11:28.906 "r_mbytes_per_sec": 0, 00:11:28.906 "w_mbytes_per_sec": 0 00:11:28.906 }, 00:11:28.906 "claimed": false, 00:11:28.906 "zoned": false, 00:11:28.906 "supported_io_types": { 00:11:28.906 "read": true, 00:11:28.906 "write": true, 00:11:28.906 "unmap": true, 00:11:28.906 "flush": true, 00:11:28.906 "reset": true, 00:11:28.906 "nvme_admin": false, 00:11:28.906 "nvme_io": false, 00:11:28.906 "nvme_io_md": false, 00:11:28.906 "write_zeroes": true, 00:11:28.906 "zcopy": false, 00:11:28.906 "get_zone_info": false, 00:11:28.906 "zone_management": false, 00:11:28.906 "zone_append": false, 00:11:28.906 "compare": false, 00:11:28.906 "compare_and_write": false, 00:11:28.906 "abort": false, 00:11:28.906 "seek_hole": false, 00:11:28.906 "seek_data": false, 00:11:28.907 "copy": false, 00:11:28.907 
"nvme_iov_md": false 00:11:28.907 }, 00:11:28.907 "memory_domains": [ 00:11:28.907 { 00:11:28.907 "dma_device_id": "system", 00:11:28.907 "dma_device_type": 1 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.907 "dma_device_type": 2 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "system", 00:11:28.907 "dma_device_type": 1 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.907 "dma_device_type": 2 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "system", 00:11:28.907 "dma_device_type": 1 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.907 "dma_device_type": 2 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "system", 00:11:28.907 "dma_device_type": 1 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.907 "dma_device_type": 2 00:11:28.907 } 00:11:28.907 ], 00:11:28.907 "driver_specific": { 00:11:28.907 "raid": { 00:11:28.907 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:28.907 "strip_size_kb": 64, 00:11:28.907 "state": "online", 00:11:28.907 "raid_level": "raid0", 00:11:28.907 "superblock": true, 00:11:28.907 "num_base_bdevs": 4, 00:11:28.907 "num_base_bdevs_discovered": 4, 00:11:28.907 "num_base_bdevs_operational": 4, 00:11:28.907 "base_bdevs_list": [ 00:11:28.907 { 00:11:28.907 "name": "BaseBdev1", 00:11:28.907 "uuid": "ce2af981-c819-42da-8240-581e09939f46", 00:11:28.907 "is_configured": true, 00:11:28.907 "data_offset": 2048, 00:11:28.907 "data_size": 63488 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "name": "BaseBdev2", 00:11:28.907 "uuid": "e7c0b6b9-71cc-46ce-8a07-d99d70652237", 00:11:28.907 "is_configured": true, 00:11:28.907 "data_offset": 2048, 00:11:28.907 "data_size": 63488 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "name": "BaseBdev3", 00:11:28.907 "uuid": "63bc462d-7657-4e75-a8ba-3b82034dccd3", 00:11:28.907 "is_configured": true, 
00:11:28.907 "data_offset": 2048, 00:11:28.907 "data_size": 63488 00:11:28.907 }, 00:11:28.907 { 00:11:28.907 "name": "BaseBdev4", 00:11:28.907 "uuid": "39907855-5e52-4bee-a35b-4b9be8178b61", 00:11:28.907 "is_configured": true, 00:11:28.907 "data_offset": 2048, 00:11:28.907 "data_size": 63488 00:11:28.907 } 00:11:28.907 ] 00:11:28.907 } 00:11:28.907 } 00:11:28.907 }' 00:11:28.907 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.907 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:28.907 BaseBdev2 00:11:28.907 BaseBdev3 00:11:28.907 BaseBdev4' 00:11:28.907 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.167 19:09:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.167 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.167 [2024-11-27 19:09:38.741724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.167 [2024-11-27 19:09:38.741762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.167 [2024-11-27 19:09:38.741822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.427 "name": "Existed_Raid", 00:11:29.427 "uuid": "ed3c3782-6c49-48d2-9863-6bd3891e0b00", 00:11:29.427 "strip_size_kb": 64, 00:11:29.427 "state": "offline", 00:11:29.427 "raid_level": "raid0", 00:11:29.427 "superblock": true, 00:11:29.427 "num_base_bdevs": 4, 00:11:29.427 "num_base_bdevs_discovered": 3, 00:11:29.427 "num_base_bdevs_operational": 3, 00:11:29.427 "base_bdevs_list": [ 00:11:29.427 { 00:11:29.427 "name": null, 00:11:29.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.427 "is_configured": false, 00:11:29.427 "data_offset": 0, 00:11:29.427 "data_size": 63488 00:11:29.427 }, 00:11:29.427 { 00:11:29.427 "name": "BaseBdev2", 00:11:29.427 "uuid": "e7c0b6b9-71cc-46ce-8a07-d99d70652237", 00:11:29.427 "is_configured": true, 00:11:29.427 "data_offset": 2048, 00:11:29.427 "data_size": 63488 00:11:29.427 }, 00:11:29.427 { 00:11:29.427 "name": "BaseBdev3", 00:11:29.427 "uuid": "63bc462d-7657-4e75-a8ba-3b82034dccd3", 00:11:29.427 "is_configured": true, 00:11:29.427 "data_offset": 2048, 00:11:29.427 "data_size": 63488 00:11:29.427 }, 00:11:29.427 { 00:11:29.427 "name": "BaseBdev4", 00:11:29.427 "uuid": "39907855-5e52-4bee-a35b-4b9be8178b61", 00:11:29.427 "is_configured": true, 00:11:29.427 "data_offset": 2048, 00:11:29.427 "data_size": 63488 00:11:29.427 } 00:11:29.427 ] 00:11:29.427 }' 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.427 19:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.686 
19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.686 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.686 [2024-11-27 19:09:39.319359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.946 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.946 [2024-11-27 19:09:39.486060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:30.206 19:09:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.206 [2024-11-27 19:09:39.652519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:30.206 [2024-11-27 19:09:39.652675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.206 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.467 BaseBdev2 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.467 [ 00:11:30.467 { 00:11:30.467 "name": "BaseBdev2", 00:11:30.467 "aliases": [ 00:11:30.467 
"e3b2295f-dfbb-4488-840f-f9dc29893546" 00:11:30.467 ], 00:11:30.467 "product_name": "Malloc disk", 00:11:30.467 "block_size": 512, 00:11:30.467 "num_blocks": 65536, 00:11:30.467 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:30.467 "assigned_rate_limits": { 00:11:30.467 "rw_ios_per_sec": 0, 00:11:30.467 "rw_mbytes_per_sec": 0, 00:11:30.467 "r_mbytes_per_sec": 0, 00:11:30.467 "w_mbytes_per_sec": 0 00:11:30.467 }, 00:11:30.467 "claimed": false, 00:11:30.467 "zoned": false, 00:11:30.467 "supported_io_types": { 00:11:30.467 "read": true, 00:11:30.467 "write": true, 00:11:30.467 "unmap": true, 00:11:30.467 "flush": true, 00:11:30.467 "reset": true, 00:11:30.467 "nvme_admin": false, 00:11:30.467 "nvme_io": false, 00:11:30.467 "nvme_io_md": false, 00:11:30.467 "write_zeroes": true, 00:11:30.467 "zcopy": true, 00:11:30.467 "get_zone_info": false, 00:11:30.467 "zone_management": false, 00:11:30.467 "zone_append": false, 00:11:30.467 "compare": false, 00:11:30.467 "compare_and_write": false, 00:11:30.467 "abort": true, 00:11:30.467 "seek_hole": false, 00:11:30.467 "seek_data": false, 00:11:30.467 "copy": true, 00:11:30.467 "nvme_iov_md": false 00:11:30.467 }, 00:11:30.467 "memory_domains": [ 00:11:30.467 { 00:11:30.467 "dma_device_id": "system", 00:11:30.467 "dma_device_type": 1 00:11:30.467 }, 00:11:30.467 { 00:11:30.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.467 "dma_device_type": 2 00:11:30.467 } 00:11:30.467 ], 00:11:30.467 "driver_specific": {} 00:11:30.467 } 00:11:30.467 ] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.467 19:09:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.467 BaseBdev3 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.467 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.467 [ 00:11:30.467 { 
00:11:30.467 "name": "BaseBdev3", 00:11:30.467 "aliases": [ 00:11:30.467 "514f9e52-3290-4d28-8b76-61e37a35a4d7" 00:11:30.467 ], 00:11:30.467 "product_name": "Malloc disk", 00:11:30.467 "block_size": 512, 00:11:30.467 "num_blocks": 65536, 00:11:30.467 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:30.467 "assigned_rate_limits": { 00:11:30.467 "rw_ios_per_sec": 0, 00:11:30.467 "rw_mbytes_per_sec": 0, 00:11:30.467 "r_mbytes_per_sec": 0, 00:11:30.467 "w_mbytes_per_sec": 0 00:11:30.467 }, 00:11:30.467 "claimed": false, 00:11:30.467 "zoned": false, 00:11:30.467 "supported_io_types": { 00:11:30.467 "read": true, 00:11:30.467 "write": true, 00:11:30.467 "unmap": true, 00:11:30.467 "flush": true, 00:11:30.467 "reset": true, 00:11:30.467 "nvme_admin": false, 00:11:30.467 "nvme_io": false, 00:11:30.467 "nvme_io_md": false, 00:11:30.467 "write_zeroes": true, 00:11:30.467 "zcopy": true, 00:11:30.467 "get_zone_info": false, 00:11:30.467 "zone_management": false, 00:11:30.467 "zone_append": false, 00:11:30.467 "compare": false, 00:11:30.467 "compare_and_write": false, 00:11:30.467 "abort": true, 00:11:30.467 "seek_hole": false, 00:11:30.468 "seek_data": false, 00:11:30.468 "copy": true, 00:11:30.468 "nvme_iov_md": false 00:11:30.468 }, 00:11:30.468 "memory_domains": [ 00:11:30.468 { 00:11:30.468 "dma_device_id": "system", 00:11:30.468 "dma_device_type": 1 00:11:30.468 }, 00:11:30.468 { 00:11:30.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.468 "dma_device_type": 2 00:11:30.468 } 00:11:30.468 ], 00:11:30.468 "driver_specific": {} 00:11:30.468 } 00:11:30.468 ] 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.468 19:09:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.468 BaseBdev4 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:30.468 [ 00:11:30.468 { 00:11:30.468 "name": "BaseBdev4", 00:11:30.468 "aliases": [ 00:11:30.468 "346507db-db54-4d4e-87e1-f56dc4812daf" 00:11:30.468 ], 00:11:30.468 "product_name": "Malloc disk", 00:11:30.468 "block_size": 512, 00:11:30.468 "num_blocks": 65536, 00:11:30.468 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:30.468 "assigned_rate_limits": { 00:11:30.468 "rw_ios_per_sec": 0, 00:11:30.468 "rw_mbytes_per_sec": 0, 00:11:30.468 "r_mbytes_per_sec": 0, 00:11:30.468 "w_mbytes_per_sec": 0 00:11:30.468 }, 00:11:30.468 "claimed": false, 00:11:30.468 "zoned": false, 00:11:30.468 "supported_io_types": { 00:11:30.468 "read": true, 00:11:30.468 "write": true, 00:11:30.468 "unmap": true, 00:11:30.468 "flush": true, 00:11:30.468 "reset": true, 00:11:30.468 "nvme_admin": false, 00:11:30.468 "nvme_io": false, 00:11:30.468 "nvme_io_md": false, 00:11:30.468 "write_zeroes": true, 00:11:30.468 "zcopy": true, 00:11:30.468 "get_zone_info": false, 00:11:30.468 "zone_management": false, 00:11:30.468 "zone_append": false, 00:11:30.468 "compare": false, 00:11:30.468 "compare_and_write": false, 00:11:30.468 "abort": true, 00:11:30.468 "seek_hole": false, 00:11:30.468 "seek_data": false, 00:11:30.468 "copy": true, 00:11:30.468 "nvme_iov_md": false 00:11:30.468 }, 00:11:30.468 "memory_domains": [ 00:11:30.468 { 00:11:30.468 "dma_device_id": "system", 00:11:30.468 "dma_device_type": 1 00:11:30.468 }, 00:11:30.468 { 00:11:30.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.468 "dma_device_type": 2 00:11:30.468 } 00:11:30.468 ], 00:11:30.468 "driver_specific": {} 00:11:30.468 } 00:11:30.468 ] 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:30.468 19:09:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.468 [2024-11-27 19:09:40.081448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.468 [2024-11-27 19:09:40.081589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.468 [2024-11-27 19:09:40.081638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.468 [2024-11-27 19:09:40.084032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.468 [2024-11-27 19:09:40.084166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.468 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.728 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.728 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.728 "name": "Existed_Raid", 00:11:30.728 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:30.729 "strip_size_kb": 64, 00:11:30.729 "state": "configuring", 00:11:30.729 "raid_level": "raid0", 00:11:30.729 "superblock": true, 00:11:30.729 "num_base_bdevs": 4, 00:11:30.729 "num_base_bdevs_discovered": 3, 00:11:30.729 "num_base_bdevs_operational": 4, 00:11:30.729 "base_bdevs_list": [ 00:11:30.729 { 00:11:30.729 "name": "BaseBdev1", 00:11:30.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.729 "is_configured": false, 00:11:30.729 "data_offset": 0, 00:11:30.729 "data_size": 0 00:11:30.729 }, 00:11:30.729 { 00:11:30.729 "name": "BaseBdev2", 00:11:30.729 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:30.729 "is_configured": true, 00:11:30.729 "data_offset": 2048, 00:11:30.729 "data_size": 63488 
00:11:30.729 }, 00:11:30.729 { 00:11:30.729 "name": "BaseBdev3", 00:11:30.729 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:30.729 "is_configured": true, 00:11:30.729 "data_offset": 2048, 00:11:30.729 "data_size": 63488 00:11:30.729 }, 00:11:30.729 { 00:11:30.729 "name": "BaseBdev4", 00:11:30.729 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:30.729 "is_configured": true, 00:11:30.729 "data_offset": 2048, 00:11:30.729 "data_size": 63488 00:11:30.729 } 00:11:30.729 ] 00:11:30.729 }' 00:11:30.729 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.729 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.989 [2024-11-27 19:09:40.508765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.989 "name": "Existed_Raid", 00:11:30.989 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:30.989 "strip_size_kb": 64, 00:11:30.989 "state": "configuring", 00:11:30.989 "raid_level": "raid0", 00:11:30.989 "superblock": true, 00:11:30.989 "num_base_bdevs": 4, 00:11:30.989 "num_base_bdevs_discovered": 2, 00:11:30.989 "num_base_bdevs_operational": 4, 00:11:30.989 "base_bdevs_list": [ 00:11:30.989 { 00:11:30.989 "name": "BaseBdev1", 00:11:30.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.989 "is_configured": false, 00:11:30.989 "data_offset": 0, 00:11:30.989 "data_size": 0 00:11:30.989 }, 00:11:30.989 { 00:11:30.989 "name": null, 00:11:30.989 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:30.989 "is_configured": false, 00:11:30.989 "data_offset": 0, 00:11:30.989 "data_size": 63488 
00:11:30.989 }, 00:11:30.989 { 00:11:30.989 "name": "BaseBdev3", 00:11:30.989 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:30.989 "is_configured": true, 00:11:30.989 "data_offset": 2048, 00:11:30.989 "data_size": 63488 00:11:30.989 }, 00:11:30.989 { 00:11:30.989 "name": "BaseBdev4", 00:11:30.989 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:30.989 "is_configured": true, 00:11:30.989 "data_offset": 2048, 00:11:30.989 "data_size": 63488 00:11:30.989 } 00:11:30.989 ] 00:11:30.989 }' 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.989 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.560 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.560 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 19:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.560 19:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 [2024-11-27 19:09:41.051217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.560 BaseBdev1 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 [ 00:11:31.560 { 00:11:31.560 "name": "BaseBdev1", 00:11:31.560 "aliases": [ 00:11:31.560 "724c6591-7cce-42a4-83b8-7cd9817b6e5e" 00:11:31.560 ], 00:11:31.560 "product_name": "Malloc disk", 00:11:31.560 "block_size": 512, 00:11:31.560 "num_blocks": 65536, 00:11:31.560 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:31.560 "assigned_rate_limits": { 00:11:31.560 "rw_ios_per_sec": 0, 00:11:31.560 "rw_mbytes_per_sec": 0, 
00:11:31.560 "r_mbytes_per_sec": 0, 00:11:31.560 "w_mbytes_per_sec": 0 00:11:31.560 }, 00:11:31.560 "claimed": true, 00:11:31.560 "claim_type": "exclusive_write", 00:11:31.560 "zoned": false, 00:11:31.560 "supported_io_types": { 00:11:31.560 "read": true, 00:11:31.560 "write": true, 00:11:31.560 "unmap": true, 00:11:31.560 "flush": true, 00:11:31.560 "reset": true, 00:11:31.560 "nvme_admin": false, 00:11:31.560 "nvme_io": false, 00:11:31.560 "nvme_io_md": false, 00:11:31.560 "write_zeroes": true, 00:11:31.560 "zcopy": true, 00:11:31.560 "get_zone_info": false, 00:11:31.560 "zone_management": false, 00:11:31.560 "zone_append": false, 00:11:31.560 "compare": false, 00:11:31.560 "compare_and_write": false, 00:11:31.560 "abort": true, 00:11:31.560 "seek_hole": false, 00:11:31.560 "seek_data": false, 00:11:31.560 "copy": true, 00:11:31.560 "nvme_iov_md": false 00:11:31.560 }, 00:11:31.560 "memory_domains": [ 00:11:31.560 { 00:11:31.560 "dma_device_id": "system", 00:11:31.560 "dma_device_type": 1 00:11:31.560 }, 00:11:31.560 { 00:11:31.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.560 "dma_device_type": 2 00:11:31.560 } 00:11:31.560 ], 00:11:31.560 "driver_specific": {} 00:11:31.560 } 00:11:31.560 ] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.560 19:09:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.560 "name": "Existed_Raid", 00:11:31.560 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:31.560 "strip_size_kb": 64, 00:11:31.560 "state": "configuring", 00:11:31.560 "raid_level": "raid0", 00:11:31.560 "superblock": true, 00:11:31.560 "num_base_bdevs": 4, 00:11:31.560 "num_base_bdevs_discovered": 3, 00:11:31.560 "num_base_bdevs_operational": 4, 00:11:31.560 "base_bdevs_list": [ 00:11:31.560 { 00:11:31.560 "name": "BaseBdev1", 00:11:31.560 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:31.560 "is_configured": true, 00:11:31.560 "data_offset": 2048, 00:11:31.560 "data_size": 63488 00:11:31.560 }, 00:11:31.560 { 
00:11:31.560 "name": null, 00:11:31.560 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:31.560 "is_configured": false, 00:11:31.560 "data_offset": 0, 00:11:31.560 "data_size": 63488 00:11:31.560 }, 00:11:31.560 { 00:11:31.560 "name": "BaseBdev3", 00:11:31.560 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:31.560 "is_configured": true, 00:11:31.560 "data_offset": 2048, 00:11:31.560 "data_size": 63488 00:11:31.560 }, 00:11:31.560 { 00:11:31.560 "name": "BaseBdev4", 00:11:31.560 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:31.560 "is_configured": true, 00:11:31.560 "data_offset": 2048, 00:11:31.560 "data_size": 63488 00:11:31.560 } 00:11:31.560 ] 00:11:31.560 }' 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.560 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.131 [2024-11-27 19:09:41.574433] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.131 19:09:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.131 "name": "Existed_Raid", 00:11:32.131 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:32.131 "strip_size_kb": 64, 00:11:32.131 "state": "configuring", 00:11:32.131 "raid_level": "raid0", 00:11:32.131 "superblock": true, 00:11:32.131 "num_base_bdevs": 4, 00:11:32.131 "num_base_bdevs_discovered": 2, 00:11:32.131 "num_base_bdevs_operational": 4, 00:11:32.131 "base_bdevs_list": [ 00:11:32.131 { 00:11:32.131 "name": "BaseBdev1", 00:11:32.131 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:32.131 "is_configured": true, 00:11:32.131 "data_offset": 2048, 00:11:32.131 "data_size": 63488 00:11:32.131 }, 00:11:32.131 { 00:11:32.131 "name": null, 00:11:32.131 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:32.131 "is_configured": false, 00:11:32.131 "data_offset": 0, 00:11:32.131 "data_size": 63488 00:11:32.131 }, 00:11:32.131 { 00:11:32.131 "name": null, 00:11:32.131 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:32.131 "is_configured": false, 00:11:32.131 "data_offset": 0, 00:11:32.131 "data_size": 63488 00:11:32.131 }, 00:11:32.131 { 00:11:32.131 "name": "BaseBdev4", 00:11:32.131 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:32.131 "is_configured": true, 00:11:32.131 "data_offset": 2048, 00:11:32.131 "data_size": 63488 00:11:32.131 } 00:11:32.131 ] 00:11:32.131 }' 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.131 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.390 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.390 19:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.390 19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.390 
19:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.390 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.650 [2024-11-27 19:09:42.045585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.650 "name": "Existed_Raid", 00:11:32.650 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:32.650 "strip_size_kb": 64, 00:11:32.650 "state": "configuring", 00:11:32.650 "raid_level": "raid0", 00:11:32.650 "superblock": true, 00:11:32.650 "num_base_bdevs": 4, 00:11:32.650 "num_base_bdevs_discovered": 3, 00:11:32.650 "num_base_bdevs_operational": 4, 00:11:32.650 "base_bdevs_list": [ 00:11:32.650 { 00:11:32.650 "name": "BaseBdev1", 00:11:32.650 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:32.650 "is_configured": true, 00:11:32.650 "data_offset": 2048, 00:11:32.650 "data_size": 63488 00:11:32.650 }, 00:11:32.650 { 00:11:32.650 "name": null, 00:11:32.650 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:32.650 "is_configured": false, 00:11:32.650 "data_offset": 0, 00:11:32.650 "data_size": 63488 00:11:32.650 }, 00:11:32.650 { 00:11:32.650 "name": "BaseBdev3", 00:11:32.650 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:32.650 "is_configured": true, 00:11:32.650 "data_offset": 2048, 00:11:32.650 "data_size": 63488 00:11:32.650 }, 00:11:32.650 { 00:11:32.650 "name": "BaseBdev4", 00:11:32.650 "uuid": 
"346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:32.650 "is_configured": true, 00:11:32.650 "data_offset": 2048, 00:11:32.650 "data_size": 63488 00:11:32.650 } 00:11:32.650 ] 00:11:32.650 }' 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.650 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.909 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.909 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.909 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.909 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.909 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 [2024-11-27 19:09:42.568769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.169 "name": "Existed_Raid", 00:11:33.169 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:33.169 "strip_size_kb": 64, 00:11:33.169 "state": "configuring", 00:11:33.169 "raid_level": "raid0", 00:11:33.169 "superblock": true, 00:11:33.169 "num_base_bdevs": 4, 00:11:33.169 "num_base_bdevs_discovered": 2, 00:11:33.169 "num_base_bdevs_operational": 4, 00:11:33.169 "base_bdevs_list": [ 00:11:33.169 { 00:11:33.169 "name": null, 00:11:33.169 
"uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:33.169 "is_configured": false, 00:11:33.169 "data_offset": 0, 00:11:33.169 "data_size": 63488 00:11:33.169 }, 00:11:33.169 { 00:11:33.169 "name": null, 00:11:33.169 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:33.169 "is_configured": false, 00:11:33.169 "data_offset": 0, 00:11:33.169 "data_size": 63488 00:11:33.169 }, 00:11:33.169 { 00:11:33.169 "name": "BaseBdev3", 00:11:33.169 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:33.169 "is_configured": true, 00:11:33.169 "data_offset": 2048, 00:11:33.169 "data_size": 63488 00:11:33.169 }, 00:11:33.169 { 00:11:33.169 "name": "BaseBdev4", 00:11:33.169 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:33.169 "is_configured": true, 00:11:33.169 "data_offset": 2048, 00:11:33.169 "data_size": 63488 00:11:33.169 } 00:11:33.169 ] 00:11:33.169 }' 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.169 19:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 [2024-11-27 19:09:43.207065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.738 19:09:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.738 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.738 "name": "Existed_Raid", 00:11:33.739 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:33.739 "strip_size_kb": 64, 00:11:33.739 "state": "configuring", 00:11:33.739 "raid_level": "raid0", 00:11:33.739 "superblock": true, 00:11:33.739 "num_base_bdevs": 4, 00:11:33.739 "num_base_bdevs_discovered": 3, 00:11:33.739 "num_base_bdevs_operational": 4, 00:11:33.739 "base_bdevs_list": [ 00:11:33.739 { 00:11:33.739 "name": null, 00:11:33.739 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:33.739 "is_configured": false, 00:11:33.739 "data_offset": 0, 00:11:33.739 "data_size": 63488 00:11:33.739 }, 00:11:33.739 { 00:11:33.739 "name": "BaseBdev2", 00:11:33.739 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:33.739 "is_configured": true, 00:11:33.739 "data_offset": 2048, 00:11:33.739 "data_size": 63488 00:11:33.739 }, 00:11:33.739 { 00:11:33.739 "name": "BaseBdev3", 00:11:33.739 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:33.739 "is_configured": true, 00:11:33.739 "data_offset": 2048, 00:11:33.739 "data_size": 63488 00:11:33.739 }, 00:11:33.739 { 00:11:33.739 "name": "BaseBdev4", 00:11:33.739 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:33.739 "is_configured": true, 00:11:33.739 "data_offset": 2048, 00:11:33.739 "data_size": 63488 00:11:33.739 } 00:11:33.739 ] 00:11:33.739 }' 00:11:33.739 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.739 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.998 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.998 19:09:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:33.998 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.998 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 724c6591-7cce-42a4-83b8-7cd9817b6e5e 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 [2024-11-27 19:09:43.772676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:34.259 [2024-11-27 19:09:43.772981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:34.259 [2024-11-27 19:09:43.772995] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:34.259 [2024-11-27 19:09:43.773306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:34.259 [2024-11-27 19:09:43.773483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:34.259 [2024-11-27 19:09:43.773495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:34.259 [2024-11-27 19:09:43.773639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.259 NewBaseBdev 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.259 19:09:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 [ 00:11:34.259 { 00:11:34.259 "name": "NewBaseBdev", 00:11:34.259 "aliases": [ 00:11:34.259 "724c6591-7cce-42a4-83b8-7cd9817b6e5e" 00:11:34.259 ], 00:11:34.259 "product_name": "Malloc disk", 00:11:34.259 "block_size": 512, 00:11:34.259 "num_blocks": 65536, 00:11:34.259 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:34.259 "assigned_rate_limits": { 00:11:34.259 "rw_ios_per_sec": 0, 00:11:34.259 "rw_mbytes_per_sec": 0, 00:11:34.259 "r_mbytes_per_sec": 0, 00:11:34.259 "w_mbytes_per_sec": 0 00:11:34.259 }, 00:11:34.259 "claimed": true, 00:11:34.259 "claim_type": "exclusive_write", 00:11:34.259 "zoned": false, 00:11:34.259 "supported_io_types": { 00:11:34.259 "read": true, 00:11:34.259 "write": true, 00:11:34.259 "unmap": true, 00:11:34.259 "flush": true, 00:11:34.259 "reset": true, 00:11:34.259 "nvme_admin": false, 00:11:34.259 "nvme_io": false, 00:11:34.259 "nvme_io_md": false, 00:11:34.259 "write_zeroes": true, 00:11:34.259 "zcopy": true, 00:11:34.259 "get_zone_info": false, 00:11:34.259 "zone_management": false, 00:11:34.259 "zone_append": false, 00:11:34.259 "compare": false, 00:11:34.259 "compare_and_write": false, 00:11:34.259 "abort": true, 00:11:34.259 "seek_hole": false, 00:11:34.259 "seek_data": false, 00:11:34.259 "copy": true, 00:11:34.259 "nvme_iov_md": false 00:11:34.259 }, 00:11:34.259 "memory_domains": [ 00:11:34.259 { 00:11:34.259 "dma_device_id": "system", 00:11:34.259 "dma_device_type": 1 00:11:34.259 }, 00:11:34.259 { 00:11:34.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.259 "dma_device_type": 2 00:11:34.259 } 00:11:34.259 ], 00:11:34.259 "driver_specific": {} 00:11:34.259 } 00:11:34.259 ] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.259 19:09:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.259 "name": "Existed_Raid", 00:11:34.259 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:34.259 "strip_size_kb": 64, 00:11:34.259 
"state": "online", 00:11:34.259 "raid_level": "raid0", 00:11:34.259 "superblock": true, 00:11:34.259 "num_base_bdevs": 4, 00:11:34.259 "num_base_bdevs_discovered": 4, 00:11:34.259 "num_base_bdevs_operational": 4, 00:11:34.259 "base_bdevs_list": [ 00:11:34.259 { 00:11:34.259 "name": "NewBaseBdev", 00:11:34.259 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:34.259 "is_configured": true, 00:11:34.259 "data_offset": 2048, 00:11:34.259 "data_size": 63488 00:11:34.259 }, 00:11:34.259 { 00:11:34.259 "name": "BaseBdev2", 00:11:34.259 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:34.259 "is_configured": true, 00:11:34.259 "data_offset": 2048, 00:11:34.259 "data_size": 63488 00:11:34.259 }, 00:11:34.259 { 00:11:34.259 "name": "BaseBdev3", 00:11:34.259 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:34.259 "is_configured": true, 00:11:34.259 "data_offset": 2048, 00:11:34.259 "data_size": 63488 00:11:34.259 }, 00:11:34.259 { 00:11:34.259 "name": "BaseBdev4", 00:11:34.259 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:34.259 "is_configured": true, 00:11:34.259 "data_offset": 2048, 00:11:34.259 "data_size": 63488 00:11:34.259 } 00:11:34.259 ] 00:11:34.259 }' 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.259 19:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.828 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.828 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.828 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.828 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.828 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.828 
19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.828 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.829 [2024-11-27 19:09:44.292267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.829 "name": "Existed_Raid", 00:11:34.829 "aliases": [ 00:11:34.829 "dcd96668-7a7c-484f-8915-67ff81b3fb96" 00:11:34.829 ], 00:11:34.829 "product_name": "Raid Volume", 00:11:34.829 "block_size": 512, 00:11:34.829 "num_blocks": 253952, 00:11:34.829 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:34.829 "assigned_rate_limits": { 00:11:34.829 "rw_ios_per_sec": 0, 00:11:34.829 "rw_mbytes_per_sec": 0, 00:11:34.829 "r_mbytes_per_sec": 0, 00:11:34.829 "w_mbytes_per_sec": 0 00:11:34.829 }, 00:11:34.829 "claimed": false, 00:11:34.829 "zoned": false, 00:11:34.829 "supported_io_types": { 00:11:34.829 "read": true, 00:11:34.829 "write": true, 00:11:34.829 "unmap": true, 00:11:34.829 "flush": true, 00:11:34.829 "reset": true, 00:11:34.829 "nvme_admin": false, 00:11:34.829 "nvme_io": false, 00:11:34.829 "nvme_io_md": false, 00:11:34.829 "write_zeroes": true, 00:11:34.829 "zcopy": false, 00:11:34.829 "get_zone_info": false, 00:11:34.829 "zone_management": false, 00:11:34.829 "zone_append": false, 00:11:34.829 "compare": false, 00:11:34.829 "compare_and_write": false, 00:11:34.829 "abort": 
false, 00:11:34.829 "seek_hole": false, 00:11:34.829 "seek_data": false, 00:11:34.829 "copy": false, 00:11:34.829 "nvme_iov_md": false 00:11:34.829 }, 00:11:34.829 "memory_domains": [ 00:11:34.829 { 00:11:34.829 "dma_device_id": "system", 00:11:34.829 "dma_device_type": 1 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.829 "dma_device_type": 2 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "system", 00:11:34.829 "dma_device_type": 1 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.829 "dma_device_type": 2 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "system", 00:11:34.829 "dma_device_type": 1 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.829 "dma_device_type": 2 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "system", 00:11:34.829 "dma_device_type": 1 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.829 "dma_device_type": 2 00:11:34.829 } 00:11:34.829 ], 00:11:34.829 "driver_specific": { 00:11:34.829 "raid": { 00:11:34.829 "uuid": "dcd96668-7a7c-484f-8915-67ff81b3fb96", 00:11:34.829 "strip_size_kb": 64, 00:11:34.829 "state": "online", 00:11:34.829 "raid_level": "raid0", 00:11:34.829 "superblock": true, 00:11:34.829 "num_base_bdevs": 4, 00:11:34.829 "num_base_bdevs_discovered": 4, 00:11:34.829 "num_base_bdevs_operational": 4, 00:11:34.829 "base_bdevs_list": [ 00:11:34.829 { 00:11:34.829 "name": "NewBaseBdev", 00:11:34.829 "uuid": "724c6591-7cce-42a4-83b8-7cd9817b6e5e", 00:11:34.829 "is_configured": true, 00:11:34.829 "data_offset": 2048, 00:11:34.829 "data_size": 63488 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "name": "BaseBdev2", 00:11:34.829 "uuid": "e3b2295f-dfbb-4488-840f-f9dc29893546", 00:11:34.829 "is_configured": true, 00:11:34.829 "data_offset": 2048, 00:11:34.829 "data_size": 63488 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 
"name": "BaseBdev3", 00:11:34.829 "uuid": "514f9e52-3290-4d28-8b76-61e37a35a4d7", 00:11:34.829 "is_configured": true, 00:11:34.829 "data_offset": 2048, 00:11:34.829 "data_size": 63488 00:11:34.829 }, 00:11:34.829 { 00:11:34.829 "name": "BaseBdev4", 00:11:34.829 "uuid": "346507db-db54-4d4e-87e1-f56dc4812daf", 00:11:34.829 "is_configured": true, 00:11:34.829 "data_offset": 2048, 00:11:34.829 "data_size": 63488 00:11:34.829 } 00:11:34.829 ] 00:11:34.829 } 00:11:34.829 } 00:11:34.829 }' 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:34.829 BaseBdev2 00:11:34.829 BaseBdev3 00:11:34.829 BaseBdev4' 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.829 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.089 19:09:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.089 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.090 [2024-11-27 19:09:44.603301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.090 [2024-11-27 19:09:44.603381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.090 [2024-11-27 19:09:44.603488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.090 [2024-11-27 19:09:44.603585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.090 [2024-11-27 19:09:44.603633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70153 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70153 ']' 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70153 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70153 00:11:35.090 killing process with pid 70153 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70153' 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70153 00:11:35.090 [2024-11-27 19:09:44.652451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.090 19:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70153 00:11:35.660 [2024-11-27 19:09:45.088866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.042 19:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:37.042 00:11:37.042 real 0m11.888s 00:11:37.042 user 0m18.559s 00:11:37.042 sys 0m2.308s 00:11:37.042 ************************************ 00:11:37.042 END TEST raid_state_function_test_sb 00:11:37.042 
************************************ 00:11:37.042 19:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.042 19:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.042 19:09:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:37.042 19:09:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.042 19:09:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.042 19:09:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.042 ************************************ 00:11:37.042 START TEST raid_superblock_test 00:11:37.042 ************************************ 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70823 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70823 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70823 ']' 00:11:37.042 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.043 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.043 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.043 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.043 19:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.043 [2024-11-27 19:09:46.491933] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:37.043 [2024-11-27 19:09:46.492069] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70823 ] 00:11:37.043 [2024-11-27 19:09:46.669423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.302 [2024-11-27 19:09:46.805196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.560 [2024-11-27 19:09:47.044878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.560 [2024-11-27 19:09:47.044935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.819 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.819 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:37.820 
19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.820 malloc1 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.820 [2024-11-27 19:09:47.376585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:37.820 [2024-11-27 19:09:47.376715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.820 [2024-11-27 19:09:47.376759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:37.820 [2024-11-27 19:09:47.376788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.820 [2024-11-27 19:09:47.379229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.820 [2024-11-27 19:09:47.379301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:37.820 pt1 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.820 malloc2 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.820 [2024-11-27 19:09:47.441852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.820 [2024-11-27 19:09:47.441977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.820 [2024-11-27 19:09:47.442011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:37.820 [2024-11-27 19:09:47.442020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.820 [2024-11-27 19:09:47.444439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.820 [2024-11-27 19:09:47.444477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.820 
pt2 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.820 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.080 malloc3 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.080 [2024-11-27 19:09:47.525146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:38.080 [2024-11-27 19:09:47.525248] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.080 [2024-11-27 19:09:47.525288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:38.080 [2024-11-27 19:09:47.525315] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.080 [2024-11-27 19:09:47.527769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.080 [2024-11-27 19:09:47.527839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:38.080 pt3 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.080 malloc4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.080 [2024-11-27 19:09:47.590145] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:38.080 [2024-11-27 19:09:47.590245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.080 [2024-11-27 19:09:47.590286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.080 [2024-11-27 19:09:47.590314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.080 [2024-11-27 19:09:47.592681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.080 [2024-11-27 19:09:47.592763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:38.080 pt4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.080 [2024-11-27 19:09:47.602160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.080 [2024-11-27 
19:09:47.604264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.080 [2024-11-27 19:09:47.604401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.080 [2024-11-27 19:09:47.604474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:38.080 [2024-11-27 19:09:47.604671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:38.080 [2024-11-27 19:09:47.604731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.080 [2024-11-27 19:09:47.605005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.080 [2024-11-27 19:09:47.605216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:38.080 [2024-11-27 19:09:47.605261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:38.080 [2024-11-27 19:09:47.605440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.080 "name": "raid_bdev1", 00:11:38.080 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:38.080 "strip_size_kb": 64, 00:11:38.080 "state": "online", 00:11:38.080 "raid_level": "raid0", 00:11:38.080 "superblock": true, 00:11:38.080 "num_base_bdevs": 4, 00:11:38.080 "num_base_bdevs_discovered": 4, 00:11:38.080 "num_base_bdevs_operational": 4, 00:11:38.080 "base_bdevs_list": [ 00:11:38.080 { 00:11:38.080 "name": "pt1", 00:11:38.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.080 "is_configured": true, 00:11:38.080 "data_offset": 2048, 00:11:38.080 "data_size": 63488 00:11:38.080 }, 00:11:38.080 { 00:11:38.080 "name": "pt2", 00:11:38.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.080 "is_configured": true, 00:11:38.080 "data_offset": 2048, 00:11:38.080 "data_size": 63488 00:11:38.080 }, 00:11:38.080 { 00:11:38.080 "name": "pt3", 00:11:38.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.080 "is_configured": true, 00:11:38.080 "data_offset": 2048, 00:11:38.080 
"data_size": 63488 00:11:38.080 }, 00:11:38.080 { 00:11:38.080 "name": "pt4", 00:11:38.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.080 "is_configured": true, 00:11:38.080 "data_offset": 2048, 00:11:38.080 "data_size": 63488 00:11:38.080 } 00:11:38.080 ] 00:11:38.080 }' 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.080 19:09:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.648 [2024-11-27 19:09:48.061696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.648 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.648 "name": "raid_bdev1", 00:11:38.648 "aliases": [ 00:11:38.648 "73bafa63-30ec-434e-b9c1-ba3ca5919cf9" 
00:11:38.648 ], 00:11:38.648 "product_name": "Raid Volume", 00:11:38.648 "block_size": 512, 00:11:38.648 "num_blocks": 253952, 00:11:38.648 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:38.648 "assigned_rate_limits": { 00:11:38.648 "rw_ios_per_sec": 0, 00:11:38.648 "rw_mbytes_per_sec": 0, 00:11:38.648 "r_mbytes_per_sec": 0, 00:11:38.648 "w_mbytes_per_sec": 0 00:11:38.648 }, 00:11:38.648 "claimed": false, 00:11:38.648 "zoned": false, 00:11:38.648 "supported_io_types": { 00:11:38.648 "read": true, 00:11:38.648 "write": true, 00:11:38.648 "unmap": true, 00:11:38.648 "flush": true, 00:11:38.648 "reset": true, 00:11:38.648 "nvme_admin": false, 00:11:38.648 "nvme_io": false, 00:11:38.648 "nvme_io_md": false, 00:11:38.648 "write_zeroes": true, 00:11:38.648 "zcopy": false, 00:11:38.648 "get_zone_info": false, 00:11:38.648 "zone_management": false, 00:11:38.648 "zone_append": false, 00:11:38.648 "compare": false, 00:11:38.648 "compare_and_write": false, 00:11:38.648 "abort": false, 00:11:38.649 "seek_hole": false, 00:11:38.649 "seek_data": false, 00:11:38.649 "copy": false, 00:11:38.649 "nvme_iov_md": false 00:11:38.649 }, 00:11:38.649 "memory_domains": [ 00:11:38.649 { 00:11:38.649 "dma_device_id": "system", 00:11:38.649 "dma_device_type": 1 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.649 "dma_device_type": 2 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": "system", 00:11:38.649 "dma_device_type": 1 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.649 "dma_device_type": 2 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": "system", 00:11:38.649 "dma_device_type": 1 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.649 "dma_device_type": 2 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": "system", 00:11:38.649 "dma_device_type": 1 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:38.649 "dma_device_type": 2 00:11:38.649 } 00:11:38.649 ], 00:11:38.649 "driver_specific": { 00:11:38.649 "raid": { 00:11:38.649 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:38.649 "strip_size_kb": 64, 00:11:38.649 "state": "online", 00:11:38.649 "raid_level": "raid0", 00:11:38.649 "superblock": true, 00:11:38.649 "num_base_bdevs": 4, 00:11:38.649 "num_base_bdevs_discovered": 4, 00:11:38.649 "num_base_bdevs_operational": 4, 00:11:38.649 "base_bdevs_list": [ 00:11:38.649 { 00:11:38.649 "name": "pt1", 00:11:38.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.649 "is_configured": true, 00:11:38.649 "data_offset": 2048, 00:11:38.649 "data_size": 63488 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "name": "pt2", 00:11:38.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.649 "is_configured": true, 00:11:38.649 "data_offset": 2048, 00:11:38.649 "data_size": 63488 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "name": "pt3", 00:11:38.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.649 "is_configured": true, 00:11:38.649 "data_offset": 2048, 00:11:38.649 "data_size": 63488 00:11:38.649 }, 00:11:38.649 { 00:11:38.649 "name": "pt4", 00:11:38.649 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.649 "is_configured": true, 00:11:38.649 "data_offset": 2048, 00:11:38.649 "data_size": 63488 00:11:38.649 } 00:11:38.649 ] 00:11:38.649 } 00:11:38.649 } 00:11:38.649 }' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.649 pt2 00:11:38.649 pt3 00:11:38.649 pt4' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.649 19:09:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.649 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 [2024-11-27 19:09:48.389070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73bafa63-30ec-434e-b9c1-ba3ca5919cf9 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 73bafa63-30ec-434e-b9c1-ba3ca5919cf9 ']' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 [2024-11-27 19:09:48.432681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.909 [2024-11-27 19:09:48.432717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.909 [2024-11-27 19:09:48.432807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.909 [2024-11-27 19:09:48.432883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.909 [2024-11-27 19:09:48.432899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.909 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.169 19:09:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.169 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.169 [2024-11-27 19:09:48.604408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:39.169 [2024-11-27 19:09:48.606549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:39.169 [2024-11-27 19:09:48.606644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:39.169 [2024-11-27 19:09:48.606707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:39.169 [2024-11-27 19:09:48.606801] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:39.169 [2024-11-27 19:09:48.606875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:39.169 [2024-11-27 19:09:48.606922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:39.169 [2024-11-27 19:09:48.606942] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:39.169 [2024-11-27 19:09:48.606955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.169 [2024-11-27 19:09:48.606968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:39.169 request: 00:11:39.169 { 00:11:39.169 "name": "raid_bdev1", 00:11:39.169 "raid_level": "raid0", 00:11:39.169 "base_bdevs": [ 00:11:39.169 "malloc1", 00:11:39.169 "malloc2", 00:11:39.169 "malloc3", 00:11:39.169 "malloc4" 00:11:39.169 ], 00:11:39.169 "strip_size_kb": 64, 00:11:39.169 "superblock": false, 00:11:39.169 "method": "bdev_raid_create", 00:11:39.169 "req_id": 1 00:11:39.169 } 00:11:39.169 Got JSON-RPC error response 00:11:39.169 response: 00:11:39.169 { 00:11:39.169 "code": -17, 00:11:39.170 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:39.170 } 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.170 [2024-11-27 19:09:48.672255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.170 [2024-11-27 19:09:48.672307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.170 [2024-11-27 19:09:48.672326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.170 [2024-11-27 19:09:48.672337] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.170 [2024-11-27 19:09:48.674778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.170 [2024-11-27 19:09:48.674816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.170 [2024-11-27 19:09:48.674910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:39.170 [2024-11-27 19:09:48.674961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.170 pt1 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.170 "name": "raid_bdev1", 00:11:39.170 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:39.170 "strip_size_kb": 64, 00:11:39.170 "state": "configuring", 00:11:39.170 "raid_level": "raid0", 00:11:39.170 "superblock": true, 00:11:39.170 "num_base_bdevs": 4, 00:11:39.170 "num_base_bdevs_discovered": 1, 00:11:39.170 "num_base_bdevs_operational": 4, 00:11:39.170 "base_bdevs_list": [ 00:11:39.170 { 00:11:39.170 "name": "pt1", 00:11:39.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.170 "is_configured": true, 00:11:39.170 "data_offset": 2048, 00:11:39.170 "data_size": 63488 00:11:39.170 }, 00:11:39.170 { 00:11:39.170 "name": null, 00:11:39.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.170 "is_configured": false, 00:11:39.170 "data_offset": 2048, 00:11:39.170 "data_size": 63488 00:11:39.170 }, 00:11:39.170 { 00:11:39.170 "name": null, 00:11:39.170 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.170 "is_configured": false, 00:11:39.170 "data_offset": 2048, 00:11:39.170 "data_size": 63488 00:11:39.170 }, 00:11:39.170 { 00:11:39.170 "name": null, 00:11:39.170 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.170 "is_configured": false, 00:11:39.170 "data_offset": 2048, 00:11:39.170 "data_size": 63488 00:11:39.170 } 00:11:39.170 ] 00:11:39.170 }' 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.170 19:09:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 [2024-11-27 19:09:49.087607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.742 [2024-11-27 19:09:49.087734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.742 [2024-11-27 19:09:49.087774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:39.742 [2024-11-27 19:09:49.087805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.742 [2024-11-27 19:09:49.088321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.742 [2024-11-27 19:09:49.088382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.742 [2024-11-27 19:09:49.088502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.742 [2024-11-27 19:09:49.088555] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.742 pt2 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 [2024-11-27 19:09:49.099596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.742 19:09:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.742 "name": "raid_bdev1", 00:11:39.742 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:39.742 "strip_size_kb": 64, 00:11:39.742 "state": "configuring", 00:11:39.742 "raid_level": "raid0", 00:11:39.742 "superblock": true, 00:11:39.742 "num_base_bdevs": 4, 00:11:39.742 "num_base_bdevs_discovered": 1, 00:11:39.742 "num_base_bdevs_operational": 4, 00:11:39.742 "base_bdevs_list": [ 00:11:39.742 { 00:11:39.742 "name": "pt1", 00:11:39.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.742 "is_configured": true, 00:11:39.742 "data_offset": 2048, 00:11:39.742 "data_size": 63488 00:11:39.742 }, 00:11:39.742 { 00:11:39.742 "name": null, 00:11:39.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.742 "is_configured": false, 00:11:39.742 "data_offset": 0, 00:11:39.742 "data_size": 63488 00:11:39.742 }, 00:11:39.742 { 00:11:39.742 "name": null, 00:11:39.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.742 "is_configured": false, 00:11:39.742 "data_offset": 2048, 00:11:39.742 "data_size": 63488 00:11:39.742 }, 00:11:39.742 { 00:11:39.742 "name": null, 00:11:39.742 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.742 "is_configured": false, 00:11:39.742 "data_offset": 2048, 00:11:39.742 "data_size": 63488 00:11:39.742 } 00:11:39.742 ] 00:11:39.742 }' 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.742 19:09:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 [2024-11-27 19:09:49.598825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.001 [2024-11-27 19:09:49.598904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.001 [2024-11-27 19:09:49.598929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:40.001 [2024-11-27 19:09:49.598939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.001 [2024-11-27 19:09:49.599460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.001 [2024-11-27 19:09:49.599478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.001 [2024-11-27 19:09:49.599574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:40.001 [2024-11-27 19:09:49.599597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.001 pt2 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 [2024-11-27 19:09:49.610765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.001 [2024-11-27 19:09:49.610819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.001 [2024-11-27 19:09:49.610839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:40.001 [2024-11-27 19:09:49.610847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.001 [2024-11-27 19:09:49.611280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.001 [2024-11-27 19:09:49.611296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.001 [2024-11-27 19:09:49.611367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:40.001 [2024-11-27 19:09:49.611393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.001 pt3 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.001 [2024-11-27 19:09:49.622708] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:40.001 [2024-11-27 19:09:49.622751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.001 [2024-11-27 19:09:49.622767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:40.001 [2024-11-27 19:09:49.622774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.001 [2024-11-27 19:09:49.623140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.001 [2024-11-27 19:09:49.623155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:40.001 [2024-11-27 19:09:49.623228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:40.001 [2024-11-27 19:09:49.623249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:40.001 [2024-11-27 19:09:49.623375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:40.001 [2024-11-27 19:09:49.623383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:40.001 [2024-11-27 19:09:49.623622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:40.001 [2024-11-27 19:09:49.623782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:40.001 [2024-11-27 19:09:49.623801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:40.001 [2024-11-27 19:09:49.623931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.001 pt4 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.001 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.261 "name": "raid_bdev1", 00:11:40.261 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:40.261 "strip_size_kb": 64, 00:11:40.261 "state": "online", 00:11:40.261 "raid_level": "raid0", 00:11:40.261 
"superblock": true, 00:11:40.261 "num_base_bdevs": 4, 00:11:40.261 "num_base_bdevs_discovered": 4, 00:11:40.261 "num_base_bdevs_operational": 4, 00:11:40.261 "base_bdevs_list": [ 00:11:40.261 { 00:11:40.261 "name": "pt1", 00:11:40.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 }, 00:11:40.261 { 00:11:40.261 "name": "pt2", 00:11:40.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 }, 00:11:40.261 { 00:11:40.261 "name": "pt3", 00:11:40.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 }, 00:11:40.261 { 00:11:40.261 "name": "pt4", 00:11:40.261 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.261 "is_configured": true, 00:11:40.261 "data_offset": 2048, 00:11:40.261 "data_size": 63488 00:11:40.261 } 00:11:40.261 ] 00:11:40.261 }' 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.261 19:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.521 19:09:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.521 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.522 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.522 [2024-11-27 19:09:50.074291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.522 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.522 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.522 "name": "raid_bdev1", 00:11:40.522 "aliases": [ 00:11:40.522 "73bafa63-30ec-434e-b9c1-ba3ca5919cf9" 00:11:40.522 ], 00:11:40.522 "product_name": "Raid Volume", 00:11:40.522 "block_size": 512, 00:11:40.522 "num_blocks": 253952, 00:11:40.522 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:40.522 "assigned_rate_limits": { 00:11:40.522 "rw_ios_per_sec": 0, 00:11:40.522 "rw_mbytes_per_sec": 0, 00:11:40.522 "r_mbytes_per_sec": 0, 00:11:40.522 "w_mbytes_per_sec": 0 00:11:40.522 }, 00:11:40.522 "claimed": false, 00:11:40.522 "zoned": false, 00:11:40.522 "supported_io_types": { 00:11:40.522 "read": true, 00:11:40.522 "write": true, 00:11:40.522 "unmap": true, 00:11:40.522 "flush": true, 00:11:40.522 "reset": true, 00:11:40.522 "nvme_admin": false, 00:11:40.522 "nvme_io": false, 00:11:40.522 "nvme_io_md": false, 00:11:40.522 "write_zeroes": true, 00:11:40.522 "zcopy": false, 00:11:40.522 "get_zone_info": false, 00:11:40.522 "zone_management": false, 00:11:40.522 "zone_append": false, 00:11:40.522 "compare": false, 00:11:40.522 "compare_and_write": false, 00:11:40.522 "abort": false, 00:11:40.522 "seek_hole": false, 00:11:40.522 "seek_data": false, 00:11:40.522 "copy": false, 00:11:40.522 "nvme_iov_md": false 00:11:40.522 }, 00:11:40.522 
"memory_domains": [ 00:11:40.522 { 00:11:40.522 "dma_device_id": "system", 00:11:40.522 "dma_device_type": 1 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.522 "dma_device_type": 2 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "system", 00:11:40.522 "dma_device_type": 1 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.522 "dma_device_type": 2 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "system", 00:11:40.522 "dma_device_type": 1 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.522 "dma_device_type": 2 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "system", 00:11:40.522 "dma_device_type": 1 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.522 "dma_device_type": 2 00:11:40.522 } 00:11:40.522 ], 00:11:40.522 "driver_specific": { 00:11:40.522 "raid": { 00:11:40.522 "uuid": "73bafa63-30ec-434e-b9c1-ba3ca5919cf9", 00:11:40.522 "strip_size_kb": 64, 00:11:40.522 "state": "online", 00:11:40.522 "raid_level": "raid0", 00:11:40.522 "superblock": true, 00:11:40.522 "num_base_bdevs": 4, 00:11:40.522 "num_base_bdevs_discovered": 4, 00:11:40.522 "num_base_bdevs_operational": 4, 00:11:40.522 "base_bdevs_list": [ 00:11:40.522 { 00:11:40.522 "name": "pt1", 00:11:40.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.522 "is_configured": true, 00:11:40.522 "data_offset": 2048, 00:11:40.522 "data_size": 63488 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "name": "pt2", 00:11:40.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.522 "is_configured": true, 00:11:40.522 "data_offset": 2048, 00:11:40.522 "data_size": 63488 00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "name": "pt3", 00:11:40.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.522 "is_configured": true, 00:11:40.522 "data_offset": 2048, 00:11:40.522 "data_size": 63488 
00:11:40.522 }, 00:11:40.522 { 00:11:40.522 "name": "pt4", 00:11:40.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.522 "is_configured": true, 00:11:40.522 "data_offset": 2048, 00:11:40.522 "data_size": 63488 00:11:40.522 } 00:11:40.522 ] 00:11:40.522 } 00:11:40.522 } 00:11:40.522 }' 00:11:40.522 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.522 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:40.522 pt2 00:11:40.522 pt3 00:11:40.522 pt4' 00:11:40.522 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.782 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.783 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.783 [2024-11-27 19:09:50.405714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73bafa63-30ec-434e-b9c1-ba3ca5919cf9 '!=' 73bafa63-30ec-434e-b9c1-ba3ca5919cf9 ']' 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70823 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70823 ']' 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70823 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70823 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.043 killing process with pid 70823 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70823' 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70823 00:11:41.043 [2024-11-27 19:09:50.481518] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.043 [2024-11-27 19:09:50.481627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.043 19:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70823 00:11:41.043 [2024-11-27 19:09:50.481726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.043 [2024-11-27 19:09:50.481738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:41.303 [2024-11-27 19:09:50.908475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.684 19:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:42.684 00:11:42.684 real 0m5.732s 00:11:42.684 user 0m7.991s 00:11:42.684 sys 0m1.118s 00:11:42.684 19:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.684 19:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.684 ************************************ 00:11:42.684 END TEST raid_superblock_test 
00:11:42.684 ************************************ 00:11:42.684 19:09:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:42.684 19:09:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:42.684 19:09:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.684 19:09:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.684 ************************************ 00:11:42.684 START TEST raid_read_error_test 00:11:42.684 ************************************ 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NqfLYYC7bQ 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71091 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71091 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71091 ']' 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.684 19:09:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.685 [2024-11-27 19:09:52.310917] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:42.685 [2024-11-27 19:09:52.311046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71091 ] 00:11:42.945 [2024-11-27 19:09:52.491062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.205 [2024-11-27 19:09:52.625164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.464 [2024-11-27 19:09:52.859227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.464 [2024-11-27 19:09:52.859305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 BaseBdev1_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 true 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 [2024-11-27 19:09:53.202179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:43.725 [2024-11-27 19:09:53.202239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.725 [2024-11-27 19:09:53.202258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:43.725 [2024-11-27 19:09:53.202271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.725 [2024-11-27 19:09:53.204601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.725 [2024-11-27 19:09:53.204641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:43.725 BaseBdev1 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 BaseBdev2_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 true 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 [2024-11-27 19:09:53.274517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:43.725 [2024-11-27 19:09:53.274573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.725 [2024-11-27 19:09:53.274589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:43.725 [2024-11-27 19:09:53.274601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.725 [2024-11-27 19:09:53.276986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.725 [2024-11-27 19:09:53.277025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:43.725 BaseBdev2 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.725 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.725 BaseBdev3_malloc 00:11:43.985 19:09:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.985 true 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.985 [2024-11-27 19:09:53.378496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:43.985 [2024-11-27 19:09:53.378549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.985 [2024-11-27 19:09:53.378566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:43.985 [2024-11-27 19:09:53.378578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.985 [2024-11-27 19:09:53.380981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.985 [2024-11-27 19:09:53.381029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:43.985 BaseBdev3 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.985 BaseBdev4_malloc 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.985 true 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.985 [2024-11-27 19:09:53.452840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:43.985 [2024-11-27 19:09:53.452893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.985 [2024-11-27 19:09:53.452911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:43.985 [2024-11-27 19:09:53.452921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.985 [2024-11-27 19:09:53.455194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.985 [2024-11-27 19:09:53.455247] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:43.985 BaseBdev4 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.985 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.985 [2024-11-27 19:09:53.464885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.985 [2024-11-27 19:09:53.466912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.985 [2024-11-27 19:09:53.466987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.985 [2024-11-27 19:09:53.467043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:43.986 [2024-11-27 19:09:53.467285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:43.986 [2024-11-27 19:09:53.467311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:43.986 [2024-11-27 19:09:53.467560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:43.986 [2024-11-27 19:09:53.467748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:43.986 [2024-11-27 19:09:53.467767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:43.986 [2024-11-27 19:09:53.467925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:43.986 19:09:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.986 "name": "raid_bdev1", 00:11:43.986 "uuid": "29697f95-893a-4a61-81fd-b58289cc0d0d", 00:11:43.986 "strip_size_kb": 64, 00:11:43.986 "state": "online", 00:11:43.986 "raid_level": "raid0", 00:11:43.986 "superblock": true, 00:11:43.986 "num_base_bdevs": 4, 00:11:43.986 "num_base_bdevs_discovered": 4, 00:11:43.986 "num_base_bdevs_operational": 4, 00:11:43.986 "base_bdevs_list": [ 00:11:43.986 
{ 00:11:43.986 "name": "BaseBdev1", 00:11:43.986 "uuid": "3284a8ec-10b6-5e6d-b360-4db297965abe", 00:11:43.986 "is_configured": true, 00:11:43.986 "data_offset": 2048, 00:11:43.986 "data_size": 63488 00:11:43.986 }, 00:11:43.986 { 00:11:43.986 "name": "BaseBdev2", 00:11:43.986 "uuid": "8b32bbd2-4763-53c5-939c-762f303a4025", 00:11:43.986 "is_configured": true, 00:11:43.986 "data_offset": 2048, 00:11:43.986 "data_size": 63488 00:11:43.986 }, 00:11:43.986 { 00:11:43.986 "name": "BaseBdev3", 00:11:43.986 "uuid": "b1394667-ab7f-5c98-a285-6a639ee7138f", 00:11:43.986 "is_configured": true, 00:11:43.986 "data_offset": 2048, 00:11:43.986 "data_size": 63488 00:11:43.986 }, 00:11:43.986 { 00:11:43.986 "name": "BaseBdev4", 00:11:43.986 "uuid": "ef4f796d-9c4b-5277-9fdc-bfad3a10e71d", 00:11:43.986 "is_configured": true, 00:11:43.986 "data_offset": 2048, 00:11:43.986 "data_size": 63488 00:11:43.986 } 00:11:43.986 ] 00:11:43.986 }' 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.986 19:09:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.556 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:44.556 19:09:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:44.556 [2024-11-27 19:09:53.985416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.549 19:09:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.549 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.550 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.550 19:09:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.550 "name": "raid_bdev1", 00:11:45.550 "uuid": "29697f95-893a-4a61-81fd-b58289cc0d0d", 00:11:45.550 "strip_size_kb": 64, 00:11:45.550 "state": "online", 00:11:45.550 "raid_level": "raid0", 00:11:45.550 "superblock": true, 00:11:45.550 "num_base_bdevs": 4, 00:11:45.550 "num_base_bdevs_discovered": 4, 00:11:45.550 "num_base_bdevs_operational": 4, 00:11:45.550 "base_bdevs_list": [ 00:11:45.550 { 00:11:45.550 "name": "BaseBdev1", 00:11:45.550 "uuid": "3284a8ec-10b6-5e6d-b360-4db297965abe", 00:11:45.550 "is_configured": true, 00:11:45.550 "data_offset": 2048, 00:11:45.550 "data_size": 63488 00:11:45.550 }, 00:11:45.550 { 00:11:45.550 "name": "BaseBdev2", 00:11:45.550 "uuid": "8b32bbd2-4763-53c5-939c-762f303a4025", 00:11:45.550 "is_configured": true, 00:11:45.550 "data_offset": 2048, 00:11:45.550 "data_size": 63488 00:11:45.550 }, 00:11:45.550 { 00:11:45.550 "name": "BaseBdev3", 00:11:45.550 "uuid": "b1394667-ab7f-5c98-a285-6a639ee7138f", 00:11:45.550 "is_configured": true, 00:11:45.550 "data_offset": 2048, 00:11:45.550 "data_size": 63488 00:11:45.550 }, 00:11:45.550 { 00:11:45.550 "name": "BaseBdev4", 00:11:45.550 "uuid": "ef4f796d-9c4b-5277-9fdc-bfad3a10e71d", 00:11:45.550 "is_configured": true, 00:11:45.550 "data_offset": 2048, 00:11:45.550 "data_size": 63488 00:11:45.550 } 00:11:45.550 ] 00:11:45.550 }' 00:11:45.550 19:09:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.550 19:09:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.811 [2024-11-27 19:09:55.390927] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.811 [2024-11-27 19:09:55.390969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.811 [2024-11-27 19:09:55.393717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.811 [2024-11-27 19:09:55.393786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.811 [2024-11-27 19:09:55.393834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.811 [2024-11-27 19:09:55.393846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:45.811 { 00:11:45.811 "results": [ 00:11:45.811 { 00:11:45.811 "job": "raid_bdev1", 00:11:45.811 "core_mask": "0x1", 00:11:45.811 "workload": "randrw", 00:11:45.811 "percentage": 50, 00:11:45.811 "status": "finished", 00:11:45.811 "queue_depth": 1, 00:11:45.811 "io_size": 131072, 00:11:45.811 "runtime": 1.40623, 00:11:45.811 "iops": 13427.390967338202, 00:11:45.811 "mibps": 1678.4238709172753, 00:11:45.811 "io_failed": 1, 00:11:45.811 "io_timeout": 0, 00:11:45.811 "avg_latency_us": 104.66310276080677, 00:11:45.811 "min_latency_us": 24.705676855895195, 00:11:45.811 "max_latency_us": 1402.2986899563318 00:11:45.811 } 00:11:45.811 ], 00:11:45.811 "core_count": 1 00:11:45.811 } 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71091 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71091 ']' 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71091 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71091 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.811 killing process with pid 71091 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71091' 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71091 00:11:45.811 [2024-11-27 19:09:55.439738] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.811 19:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71091 00:11:46.381 [2024-11-27 19:09:55.795280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NqfLYYC7bQ 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:47.764 00:11:47.764 real 0m4.881s 00:11:47.764 user 0m5.577s 00:11:47.764 sys 0m0.734s 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:47.764 19:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.764 ************************************ 00:11:47.764 END TEST raid_read_error_test 00:11:47.764 ************************************ 00:11:47.764 19:09:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:47.764 19:09:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:47.764 19:09:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.764 19:09:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.764 ************************************ 00:11:47.764 START TEST raid_write_error_test 00:11:47.764 ************************************ 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:47.764 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cAl9Bn0fzl 00:11:47.765 19:09:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71237 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71237 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71237 ']' 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.765 19:09:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.765 [2024-11-27 19:09:57.256953] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:47.765 [2024-11-27 19:09:57.257078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71237 ] 00:11:48.025 [2024-11-27 19:09:57.421137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.025 [2024-11-27 19:09:57.560564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.285 [2024-11-27 19:09:57.800674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.285 [2024-11-27 19:09:57.800746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.545 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.545 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.545 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 BaseBdev1_malloc 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 true 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 [2024-11-27 19:09:58.162918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:48.546 [2024-11-27 19:09:58.162976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.546 [2024-11-27 19:09:58.162997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:48.546 [2024-11-27 19:09:58.163009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.546 [2024-11-27 19:09:58.165432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.546 [2024-11-27 19:09:58.165472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:48.546 BaseBdev1 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.546 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 BaseBdev2_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:48.807 19:09:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 true 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 [2024-11-27 19:09:58.236507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:48.807 [2024-11-27 19:09:58.236566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.807 [2024-11-27 19:09:58.236583] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:48.807 [2024-11-27 19:09:58.236608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.807 [2024-11-27 19:09:58.238991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.807 [2024-11-27 19:09:58.239028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:48.807 BaseBdev2 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:48.807 BaseBdev3_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 true 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 [2024-11-27 19:09:58.317938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:48.807 [2024-11-27 19:09:58.317990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.807 [2024-11-27 19:09:58.318008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:48.807 [2024-11-27 19:09:58.318020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.807 [2024-11-27 19:09:58.320393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.807 [2024-11-27 19:09:58.320432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:48.807 BaseBdev3 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 BaseBdev4_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 true 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 [2024-11-27 19:09:58.392821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:48.807 [2024-11-27 19:09:58.392874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.807 [2024-11-27 19:09:58.392891] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:48.807 [2024-11-27 19:09:58.392903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.807 [2024-11-27 19:09:58.395230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.807 [2024-11-27 19:09:58.395286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:48.807 BaseBdev4 
00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.807 [2024-11-27 19:09:58.404875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.807 [2024-11-27 19:09:58.406972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.807 [2024-11-27 19:09:58.407049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.807 [2024-11-27 19:09:58.407111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:48.807 [2024-11-27 19:09:58.407336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:48.807 [2024-11-27 19:09:58.407361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:48.807 [2024-11-27 19:09:58.407613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:48.807 [2024-11-27 19:09:58.407803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:48.807 [2024-11-27 19:09:58.407821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:48.807 [2024-11-27 19:09:58.407981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.807 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.808 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.068 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.068 "name": "raid_bdev1", 00:11:49.068 "uuid": "33db4af6-06a6-4013-9c6f-7a3bac0f330c", 00:11:49.068 "strip_size_kb": 64, 00:11:49.068 "state": "online", 00:11:49.068 "raid_level": "raid0", 00:11:49.068 "superblock": true, 00:11:49.068 "num_base_bdevs": 4, 00:11:49.068 "num_base_bdevs_discovered": 4, 00:11:49.068 
"num_base_bdevs_operational": 4, 00:11:49.068 "base_bdevs_list": [ 00:11:49.068 { 00:11:49.068 "name": "BaseBdev1", 00:11:49.068 "uuid": "4516c221-e56b-574f-9414-55869aac51ae", 00:11:49.068 "is_configured": true, 00:11:49.068 "data_offset": 2048, 00:11:49.068 "data_size": 63488 00:11:49.068 }, 00:11:49.068 { 00:11:49.068 "name": "BaseBdev2", 00:11:49.068 "uuid": "5a282237-c5d6-5c5a-b556-c05438e345bf", 00:11:49.068 "is_configured": true, 00:11:49.068 "data_offset": 2048, 00:11:49.069 "data_size": 63488 00:11:49.069 }, 00:11:49.069 { 00:11:49.069 "name": "BaseBdev3", 00:11:49.069 "uuid": "db8e2a76-61b0-590b-ba43-9bc8f9005dc1", 00:11:49.069 "is_configured": true, 00:11:49.069 "data_offset": 2048, 00:11:49.069 "data_size": 63488 00:11:49.069 }, 00:11:49.069 { 00:11:49.069 "name": "BaseBdev4", 00:11:49.069 "uuid": "77479c69-00af-57cf-a9ae-2ec4ae1833ae", 00:11:49.069 "is_configured": true, 00:11:49.069 "data_offset": 2048, 00:11:49.069 "data_size": 63488 00:11:49.069 } 00:11:49.069 ] 00:11:49.069 }' 00:11:49.069 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.069 19:09:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:49.330 19:09:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:49.330 [2024-11-27 19:09:58.941303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.268 19:09:59 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.528 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.528 "name": "raid_bdev1", 00:11:50.528 "uuid": "33db4af6-06a6-4013-9c6f-7a3bac0f330c", 00:11:50.528 "strip_size_kb": 64, 00:11:50.528 "state": "online", 00:11:50.528 "raid_level": "raid0", 00:11:50.528 "superblock": true, 00:11:50.528 "num_base_bdevs": 4, 00:11:50.528 "num_base_bdevs_discovered": 4, 00:11:50.528 "num_base_bdevs_operational": 4, 00:11:50.528 "base_bdevs_list": [ 00:11:50.528 { 00:11:50.528 "name": "BaseBdev1", 00:11:50.528 "uuid": "4516c221-e56b-574f-9414-55869aac51ae", 00:11:50.528 "is_configured": true, 00:11:50.528 "data_offset": 2048, 00:11:50.528 "data_size": 63488 00:11:50.528 }, 00:11:50.528 { 00:11:50.528 "name": "BaseBdev2", 00:11:50.528 "uuid": "5a282237-c5d6-5c5a-b556-c05438e345bf", 00:11:50.528 "is_configured": true, 00:11:50.528 "data_offset": 2048, 00:11:50.528 "data_size": 63488 00:11:50.528 }, 00:11:50.528 { 00:11:50.528 "name": "BaseBdev3", 00:11:50.528 "uuid": "db8e2a76-61b0-590b-ba43-9bc8f9005dc1", 00:11:50.528 "is_configured": true, 00:11:50.528 "data_offset": 2048, 00:11:50.528 "data_size": 63488 00:11:50.528 }, 00:11:50.528 { 00:11:50.528 "name": "BaseBdev4", 00:11:50.529 "uuid": "77479c69-00af-57cf-a9ae-2ec4ae1833ae", 00:11:50.529 "is_configured": true, 00:11:50.529 "data_offset": 2048, 00:11:50.529 "data_size": 63488 00:11:50.529 } 00:11:50.529 ] 00:11:50.529 }' 00:11:50.529 19:09:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.529 19:09:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.788 19:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:50.788 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.788 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:50.788 [2024-11-27 19:10:00.314327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:50.788 [2024-11-27 19:10:00.314370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.788 [2024-11-27 19:10:00.317192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.788 [2024-11-27 19:10:00.317261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.788 [2024-11-27 19:10:00.317311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.788 [2024-11-27 19:10:00.317324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:50.788 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.788 { 00:11:50.788 "results": [ 00:11:50.788 { 00:11:50.788 "job": "raid_bdev1", 00:11:50.788 "core_mask": "0x1", 00:11:50.788 "workload": "randrw", 00:11:50.788 "percentage": 50, 00:11:50.788 "status": "finished", 00:11:50.788 "queue_depth": 1, 00:11:50.788 "io_size": 131072, 00:11:50.788 "runtime": 1.37367, 00:11:50.788 "iops": 13333.624524085115, 00:11:50.788 "mibps": 1666.7030655106394, 00:11:50.788 "io_failed": 1, 00:11:50.788 "io_timeout": 0, 00:11:50.788 "avg_latency_us": 105.46290846334793, 00:11:50.788 "min_latency_us": 25.4882096069869, 00:11:50.789 "max_latency_us": 1380.8349344978167 00:11:50.789 } 00:11:50.789 ], 00:11:50.789 "core_count": 1 00:11:50.789 } 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71237 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71237 ']' 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71237 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71237 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.789 killing process with pid 71237 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71237' 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71237 00:11:50.789 [2024-11-27 19:10:00.363382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.789 19:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71237 00:11:51.359 [2024-11-27 19:10:00.718824] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cAl9Bn0fzl 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:52.743 00:11:52.743 real 0m4.869s 00:11:52.743 user 0m5.592s 00:11:52.743 sys 0m0.708s 00:11:52.743 19:10:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.743 19:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.743 ************************************ 00:11:52.743 END TEST raid_write_error_test 00:11:52.743 ************************************ 00:11:52.743 19:10:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:52.743 19:10:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:52.743 19:10:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.743 19:10:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.743 19:10:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.743 ************************************ 00:11:52.743 START TEST raid_state_function_test 00:11:52.743 ************************************ 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71380 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71380' 00:11:52.743 Process raid pid: 71380 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71380 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71380 ']' 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.743 19:10:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.743 [2024-11-27 19:10:02.198722] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:52.743 [2024-11-27 19:10:02.198849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.002 [2024-11-27 19:10:02.378736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.002 [2024-11-27 19:10:02.519781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.261 [2024-11-27 19:10:02.762671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.261 [2024-11-27 19:10:02.762733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.520 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.520 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.520 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.520 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.520 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.520 [2024-11-27 19:10:03.030168] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.520 [2024-11-27 19:10:03.030234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.520 [2024-11-27 19:10:03.030245] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.520 [2024-11-27 19:10:03.030255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.520 [2024-11-27 19:10:03.030262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:53.520 [2024-11-27 19:10:03.030272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:53.521 [2024-11-27 19:10:03.030278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:53.521 [2024-11-27 19:10:03.030287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.521 "name": "Existed_Raid", 00:11:53.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.521 "strip_size_kb": 64, 00:11:53.521 "state": "configuring", 00:11:53.521 "raid_level": "concat", 00:11:53.521 "superblock": false, 00:11:53.521 "num_base_bdevs": 4, 00:11:53.521 "num_base_bdevs_discovered": 0, 00:11:53.521 "num_base_bdevs_operational": 4, 00:11:53.521 "base_bdevs_list": [ 00:11:53.521 { 00:11:53.521 "name": "BaseBdev1", 00:11:53.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.521 "is_configured": false, 00:11:53.521 "data_offset": 0, 00:11:53.521 "data_size": 0 00:11:53.521 }, 00:11:53.521 { 00:11:53.521 "name": "BaseBdev2", 00:11:53.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.521 "is_configured": false, 00:11:53.521 "data_offset": 0, 00:11:53.521 "data_size": 0 00:11:53.521 }, 00:11:53.521 { 00:11:53.521 "name": "BaseBdev3", 00:11:53.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.521 "is_configured": false, 00:11:53.521 "data_offset": 0, 00:11:53.521 "data_size": 0 00:11:53.521 }, 00:11:53.521 { 00:11:53.521 "name": "BaseBdev4", 00:11:53.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.521 "is_configured": false, 00:11:53.521 "data_offset": 0, 00:11:53.521 "data_size": 0 00:11:53.521 } 00:11:53.521 ] 00:11:53.521 }' 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.521 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 [2024-11-27 19:10:03.485364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.109 [2024-11-27 19:10:03.485410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 [2024-11-27 19:10:03.497329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.109 [2024-11-27 19:10:03.497375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.109 [2024-11-27 19:10:03.497385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.109 [2024-11-27 19:10:03.497396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.109 [2024-11-27 19:10:03.497402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.109 [2024-11-27 19:10:03.497412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.109 [2024-11-27 19:10:03.497418] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.109 [2024-11-27 19:10:03.497427] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 [2024-11-27 19:10:03.551955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.109 BaseBdev1 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 [ 00:11:54.109 { 00:11:54.109 "name": "BaseBdev1", 00:11:54.109 "aliases": [ 00:11:54.109 "bf209c78-264c-4100-95b9-bd5186fdf58e" 00:11:54.109 ], 00:11:54.109 "product_name": "Malloc disk", 00:11:54.109 "block_size": 512, 00:11:54.109 "num_blocks": 65536, 00:11:54.109 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:54.109 "assigned_rate_limits": { 00:11:54.109 "rw_ios_per_sec": 0, 00:11:54.109 "rw_mbytes_per_sec": 0, 00:11:54.109 "r_mbytes_per_sec": 0, 00:11:54.109 "w_mbytes_per_sec": 0 00:11:54.109 }, 00:11:54.109 "claimed": true, 00:11:54.109 "claim_type": "exclusive_write", 00:11:54.109 "zoned": false, 00:11:54.109 "supported_io_types": { 00:11:54.109 "read": true, 00:11:54.109 "write": true, 00:11:54.109 "unmap": true, 00:11:54.109 "flush": true, 00:11:54.109 "reset": true, 00:11:54.109 "nvme_admin": false, 00:11:54.109 "nvme_io": false, 00:11:54.109 "nvme_io_md": false, 00:11:54.109 "write_zeroes": true, 00:11:54.109 "zcopy": true, 00:11:54.109 "get_zone_info": false, 00:11:54.109 "zone_management": false, 00:11:54.109 "zone_append": false, 00:11:54.109 "compare": false, 00:11:54.109 "compare_and_write": false, 00:11:54.109 "abort": true, 00:11:54.109 "seek_hole": false, 00:11:54.109 "seek_data": false, 00:11:54.109 "copy": true, 00:11:54.109 "nvme_iov_md": false 00:11:54.109 }, 00:11:54.109 "memory_domains": [ 00:11:54.109 { 00:11:54.109 "dma_device_id": "system", 00:11:54.109 "dma_device_type": 1 00:11:54.109 }, 00:11:54.109 { 00:11:54.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.109 "dma_device_type": 2 00:11:54.109 } 00:11:54.109 ], 00:11:54.109 "driver_specific": {} 00:11:54.109 } 00:11:54.109 ] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.109 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.109 "name": "Existed_Raid", 
00:11:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.109 "strip_size_kb": 64, 00:11:54.109 "state": "configuring", 00:11:54.109 "raid_level": "concat", 00:11:54.109 "superblock": false, 00:11:54.109 "num_base_bdevs": 4, 00:11:54.109 "num_base_bdevs_discovered": 1, 00:11:54.109 "num_base_bdevs_operational": 4, 00:11:54.109 "base_bdevs_list": [ 00:11:54.109 { 00:11:54.109 "name": "BaseBdev1", 00:11:54.109 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:54.109 "is_configured": true, 00:11:54.109 "data_offset": 0, 00:11:54.109 "data_size": 65536 00:11:54.109 }, 00:11:54.109 { 00:11:54.109 "name": "BaseBdev2", 00:11:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.109 "is_configured": false, 00:11:54.109 "data_offset": 0, 00:11:54.109 "data_size": 0 00:11:54.109 }, 00:11:54.109 { 00:11:54.109 "name": "BaseBdev3", 00:11:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.109 "is_configured": false, 00:11:54.109 "data_offset": 0, 00:11:54.109 "data_size": 0 00:11:54.109 }, 00:11:54.109 { 00:11:54.109 "name": "BaseBdev4", 00:11:54.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.110 "is_configured": false, 00:11:54.110 "data_offset": 0, 00:11:54.110 "data_size": 0 00:11:54.110 } 00:11:54.110 ] 00:11:54.110 }' 00:11:54.110 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.110 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.387 19:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.387 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.387 19:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.387 [2024-11-27 19:10:03.999284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.387 [2024-11-27 19:10:03.999354] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.387 [2024-11-27 19:10:04.007304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.387 [2024-11-27 19:10:04.009404] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.387 [2024-11-27 19:10:04.009451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.387 [2024-11-27 19:10:04.009461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.387 [2024-11-27 19:10:04.009472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.387 [2024-11-27 19:10:04.009480] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.387 [2024-11-27 19:10:04.009489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.387 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.647 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.647 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.647 "name": "Existed_Raid", 00:11:54.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.647 "strip_size_kb": 64, 00:11:54.647 "state": "configuring", 00:11:54.647 "raid_level": "concat", 00:11:54.647 "superblock": false, 00:11:54.647 "num_base_bdevs": 4, 00:11:54.647 
"num_base_bdevs_discovered": 1, 00:11:54.647 "num_base_bdevs_operational": 4, 00:11:54.647 "base_bdevs_list": [ 00:11:54.647 { 00:11:54.647 "name": "BaseBdev1", 00:11:54.647 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:54.647 "is_configured": true, 00:11:54.647 "data_offset": 0, 00:11:54.647 "data_size": 65536 00:11:54.647 }, 00:11:54.647 { 00:11:54.647 "name": "BaseBdev2", 00:11:54.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.647 "is_configured": false, 00:11:54.647 "data_offset": 0, 00:11:54.647 "data_size": 0 00:11:54.647 }, 00:11:54.647 { 00:11:54.647 "name": "BaseBdev3", 00:11:54.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.647 "is_configured": false, 00:11:54.647 "data_offset": 0, 00:11:54.647 "data_size": 0 00:11:54.647 }, 00:11:54.647 { 00:11:54.647 "name": "BaseBdev4", 00:11:54.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.647 "is_configured": false, 00:11:54.647 "data_offset": 0, 00:11:54.647 "data_size": 0 00:11:54.647 } 00:11:54.647 ] 00:11:54.647 }' 00:11:54.647 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.647 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 [2024-11-27 19:10:04.467708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.907 BaseBdev2 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:54.907 19:10:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.907 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.907 [ 00:11:54.907 { 00:11:54.907 "name": "BaseBdev2", 00:11:54.907 "aliases": [ 00:11:54.907 "82bb4891-996b-4aab-bee6-6be2f51923fd" 00:11:54.907 ], 00:11:54.907 "product_name": "Malloc disk", 00:11:54.907 "block_size": 512, 00:11:54.907 "num_blocks": 65536, 00:11:54.907 "uuid": "82bb4891-996b-4aab-bee6-6be2f51923fd", 00:11:54.907 "assigned_rate_limits": { 00:11:54.907 "rw_ios_per_sec": 0, 00:11:54.907 "rw_mbytes_per_sec": 0, 00:11:54.907 "r_mbytes_per_sec": 0, 00:11:54.907 "w_mbytes_per_sec": 0 00:11:54.907 }, 00:11:54.907 "claimed": true, 00:11:54.907 "claim_type": "exclusive_write", 00:11:54.907 "zoned": false, 00:11:54.907 "supported_io_types": { 
00:11:54.907 "read": true, 00:11:54.907 "write": true, 00:11:54.907 "unmap": true, 00:11:54.907 "flush": true, 00:11:54.907 "reset": true, 00:11:54.907 "nvme_admin": false, 00:11:54.907 "nvme_io": false, 00:11:54.907 "nvme_io_md": false, 00:11:54.907 "write_zeroes": true, 00:11:54.907 "zcopy": true, 00:11:54.907 "get_zone_info": false, 00:11:54.907 "zone_management": false, 00:11:54.907 "zone_append": false, 00:11:54.907 "compare": false, 00:11:54.907 "compare_and_write": false, 00:11:54.907 "abort": true, 00:11:54.907 "seek_hole": false, 00:11:54.907 "seek_data": false, 00:11:54.908 "copy": true, 00:11:54.908 "nvme_iov_md": false 00:11:54.908 }, 00:11:54.908 "memory_domains": [ 00:11:54.908 { 00:11:54.908 "dma_device_id": "system", 00:11:54.908 "dma_device_type": 1 00:11:54.908 }, 00:11:54.908 { 00:11:54.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.908 "dma_device_type": 2 00:11:54.908 } 00:11:54.908 ], 00:11:54.908 "driver_specific": {} 00:11:54.908 } 00:11:54.908 ] 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.908 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.168 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.168 "name": "Existed_Raid", 00:11:55.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.168 "strip_size_kb": 64, 00:11:55.168 "state": "configuring", 00:11:55.168 "raid_level": "concat", 00:11:55.168 "superblock": false, 00:11:55.168 "num_base_bdevs": 4, 00:11:55.168 "num_base_bdevs_discovered": 2, 00:11:55.168 "num_base_bdevs_operational": 4, 00:11:55.168 "base_bdevs_list": [ 00:11:55.168 { 00:11:55.168 "name": "BaseBdev1", 00:11:55.168 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:55.168 "is_configured": true, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 65536 00:11:55.168 }, 00:11:55.168 { 00:11:55.168 "name": "BaseBdev2", 00:11:55.168 "uuid": "82bb4891-996b-4aab-bee6-6be2f51923fd", 00:11:55.168 
"is_configured": true, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 65536 00:11:55.168 }, 00:11:55.168 { 00:11:55.168 "name": "BaseBdev3", 00:11:55.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.168 "is_configured": false, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 0 00:11:55.168 }, 00:11:55.168 { 00:11:55.168 "name": "BaseBdev4", 00:11:55.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.168 "is_configured": false, 00:11:55.168 "data_offset": 0, 00:11:55.168 "data_size": 0 00:11:55.168 } 00:11:55.168 ] 00:11:55.168 }' 00:11:55.168 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.168 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.428 19:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.428 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.428 19:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.428 [2024-11-27 19:10:05.018401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.428 BaseBdev3 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.428 [ 00:11:55.428 { 00:11:55.428 "name": "BaseBdev3", 00:11:55.428 "aliases": [ 00:11:55.428 "4986d094-8c8e-4d81-9966-b3a71ecdfac7" 00:11:55.428 ], 00:11:55.428 "product_name": "Malloc disk", 00:11:55.428 "block_size": 512, 00:11:55.428 "num_blocks": 65536, 00:11:55.428 "uuid": "4986d094-8c8e-4d81-9966-b3a71ecdfac7", 00:11:55.428 "assigned_rate_limits": { 00:11:55.428 "rw_ios_per_sec": 0, 00:11:55.428 "rw_mbytes_per_sec": 0, 00:11:55.428 "r_mbytes_per_sec": 0, 00:11:55.428 "w_mbytes_per_sec": 0 00:11:55.428 }, 00:11:55.428 "claimed": true, 00:11:55.428 "claim_type": "exclusive_write", 00:11:55.428 "zoned": false, 00:11:55.428 "supported_io_types": { 00:11:55.428 "read": true, 00:11:55.428 "write": true, 00:11:55.428 "unmap": true, 00:11:55.428 "flush": true, 00:11:55.428 "reset": true, 00:11:55.428 "nvme_admin": false, 00:11:55.428 "nvme_io": false, 00:11:55.428 "nvme_io_md": false, 00:11:55.428 "write_zeroes": true, 00:11:55.428 "zcopy": true, 00:11:55.428 "get_zone_info": false, 00:11:55.428 "zone_management": false, 00:11:55.428 "zone_append": false, 00:11:55.428 "compare": false, 00:11:55.428 "compare_and_write": false, 
00:11:55.428 "abort": true, 00:11:55.428 "seek_hole": false, 00:11:55.428 "seek_data": false, 00:11:55.428 "copy": true, 00:11:55.428 "nvme_iov_md": false 00:11:55.428 }, 00:11:55.428 "memory_domains": [ 00:11:55.428 { 00:11:55.428 "dma_device_id": "system", 00:11:55.428 "dma_device_type": 1 00:11:55.428 }, 00:11:55.428 { 00:11:55.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.428 "dma_device_type": 2 00:11:55.428 } 00:11:55.428 ], 00:11:55.428 "driver_specific": {} 00:11:55.428 } 00:11:55.428 ] 00:11:55.428 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:55.429 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.689 "name": "Existed_Raid", 00:11:55.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.689 "strip_size_kb": 64, 00:11:55.689 "state": "configuring", 00:11:55.689 "raid_level": "concat", 00:11:55.689 "superblock": false, 00:11:55.689 "num_base_bdevs": 4, 00:11:55.689 "num_base_bdevs_discovered": 3, 00:11:55.689 "num_base_bdevs_operational": 4, 00:11:55.689 "base_bdevs_list": [ 00:11:55.689 { 00:11:55.689 "name": "BaseBdev1", 00:11:55.689 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:55.689 "is_configured": true, 00:11:55.689 "data_offset": 0, 00:11:55.689 "data_size": 65536 00:11:55.689 }, 00:11:55.689 { 00:11:55.689 "name": "BaseBdev2", 00:11:55.689 "uuid": "82bb4891-996b-4aab-bee6-6be2f51923fd", 00:11:55.689 "is_configured": true, 00:11:55.689 "data_offset": 0, 00:11:55.689 "data_size": 65536 00:11:55.689 }, 00:11:55.689 { 00:11:55.689 "name": "BaseBdev3", 00:11:55.689 "uuid": "4986d094-8c8e-4d81-9966-b3a71ecdfac7", 00:11:55.689 "is_configured": true, 00:11:55.689 "data_offset": 0, 00:11:55.689 "data_size": 65536 00:11:55.689 }, 00:11:55.689 { 00:11:55.689 "name": "BaseBdev4", 00:11:55.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.689 "is_configured": false, 
00:11:55.689 "data_offset": 0, 00:11:55.689 "data_size": 0 00:11:55.689 } 00:11:55.689 ] 00:11:55.689 }' 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.689 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.949 [2024-11-27 19:10:05.530944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.949 [2024-11-27 19:10:05.531003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:55.949 [2024-11-27 19:10:05.531011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:55.949 [2024-11-27 19:10:05.531348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:55.949 [2024-11-27 19:10:05.531539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:55.949 [2024-11-27 19:10:05.531559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:55.949 [2024-11-27 19:10:05.531841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.949 BaseBdev4 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.949 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.950 [ 00:11:55.950 { 00:11:55.950 "name": "BaseBdev4", 00:11:55.950 "aliases": [ 00:11:55.950 "65956792-125f-4bea-a768-8750dbc87038" 00:11:55.950 ], 00:11:55.950 "product_name": "Malloc disk", 00:11:55.950 "block_size": 512, 00:11:55.950 "num_blocks": 65536, 00:11:55.950 "uuid": "65956792-125f-4bea-a768-8750dbc87038", 00:11:55.950 "assigned_rate_limits": { 00:11:55.950 "rw_ios_per_sec": 0, 00:11:55.950 "rw_mbytes_per_sec": 0, 00:11:55.950 "r_mbytes_per_sec": 0, 00:11:55.950 "w_mbytes_per_sec": 0 00:11:55.950 }, 00:11:55.950 "claimed": true, 00:11:55.950 "claim_type": "exclusive_write", 00:11:55.950 "zoned": false, 00:11:55.950 "supported_io_types": { 00:11:55.950 "read": true, 00:11:55.950 "write": true, 00:11:55.950 "unmap": true, 00:11:55.950 "flush": true, 00:11:55.950 "reset": true, 00:11:55.950 
"nvme_admin": false, 00:11:55.950 "nvme_io": false, 00:11:55.950 "nvme_io_md": false, 00:11:55.950 "write_zeroes": true, 00:11:55.950 "zcopy": true, 00:11:55.950 "get_zone_info": false, 00:11:55.950 "zone_management": false, 00:11:55.950 "zone_append": false, 00:11:55.950 "compare": false, 00:11:55.950 "compare_and_write": false, 00:11:55.950 "abort": true, 00:11:55.950 "seek_hole": false, 00:11:55.950 "seek_data": false, 00:11:55.950 "copy": true, 00:11:55.950 "nvme_iov_md": false 00:11:55.950 }, 00:11:55.950 "memory_domains": [ 00:11:55.950 { 00:11:55.950 "dma_device_id": "system", 00:11:55.950 "dma_device_type": 1 00:11:55.950 }, 00:11:55.950 { 00:11:55.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.950 "dma_device_type": 2 00:11:55.950 } 00:11:55.950 ], 00:11:55.950 "driver_specific": {} 00:11:55.950 } 00:11:55.950 ] 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.950 
19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.950 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.210 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.210 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.210 "name": "Existed_Raid", 00:11:56.210 "uuid": "deef64ed-d3de-4654-a94e-d2a9b187540a", 00:11:56.210 "strip_size_kb": 64, 00:11:56.210 "state": "online", 00:11:56.210 "raid_level": "concat", 00:11:56.210 "superblock": false, 00:11:56.210 "num_base_bdevs": 4, 00:11:56.210 "num_base_bdevs_discovered": 4, 00:11:56.210 "num_base_bdevs_operational": 4, 00:11:56.210 "base_bdevs_list": [ 00:11:56.210 { 00:11:56.210 "name": "BaseBdev1", 00:11:56.210 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:56.210 "is_configured": true, 00:11:56.210 "data_offset": 0, 00:11:56.210 "data_size": 65536 00:11:56.210 }, 00:11:56.210 { 00:11:56.210 "name": "BaseBdev2", 00:11:56.210 "uuid": "82bb4891-996b-4aab-bee6-6be2f51923fd", 00:11:56.210 "is_configured": true, 00:11:56.210 "data_offset": 0, 00:11:56.210 "data_size": 65536 00:11:56.210 }, 00:11:56.210 { 00:11:56.210 "name": "BaseBdev3", 
00:11:56.210 "uuid": "4986d094-8c8e-4d81-9966-b3a71ecdfac7", 00:11:56.210 "is_configured": true, 00:11:56.210 "data_offset": 0, 00:11:56.210 "data_size": 65536 00:11:56.210 }, 00:11:56.210 { 00:11:56.210 "name": "BaseBdev4", 00:11:56.210 "uuid": "65956792-125f-4bea-a768-8750dbc87038", 00:11:56.210 "is_configured": true, 00:11:56.210 "data_offset": 0, 00:11:56.210 "data_size": 65536 00:11:56.210 } 00:11:56.210 ] 00:11:56.210 }' 00:11:56.210 19:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.210 19:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.470 [2024-11-27 19:10:06.018526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.470 
19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.470 "name": "Existed_Raid", 00:11:56.470 "aliases": [ 00:11:56.470 "deef64ed-d3de-4654-a94e-d2a9b187540a" 00:11:56.470 ], 00:11:56.470 "product_name": "Raid Volume", 00:11:56.470 "block_size": 512, 00:11:56.470 "num_blocks": 262144, 00:11:56.470 "uuid": "deef64ed-d3de-4654-a94e-d2a9b187540a", 00:11:56.470 "assigned_rate_limits": { 00:11:56.470 "rw_ios_per_sec": 0, 00:11:56.470 "rw_mbytes_per_sec": 0, 00:11:56.470 "r_mbytes_per_sec": 0, 00:11:56.470 "w_mbytes_per_sec": 0 00:11:56.470 }, 00:11:56.470 "claimed": false, 00:11:56.470 "zoned": false, 00:11:56.470 "supported_io_types": { 00:11:56.470 "read": true, 00:11:56.470 "write": true, 00:11:56.470 "unmap": true, 00:11:56.470 "flush": true, 00:11:56.470 "reset": true, 00:11:56.470 "nvme_admin": false, 00:11:56.470 "nvme_io": false, 00:11:56.470 "nvme_io_md": false, 00:11:56.470 "write_zeroes": true, 00:11:56.470 "zcopy": false, 00:11:56.470 "get_zone_info": false, 00:11:56.470 "zone_management": false, 00:11:56.470 "zone_append": false, 00:11:56.470 "compare": false, 00:11:56.470 "compare_and_write": false, 00:11:56.470 "abort": false, 00:11:56.470 "seek_hole": false, 00:11:56.470 "seek_data": false, 00:11:56.470 "copy": false, 00:11:56.470 "nvme_iov_md": false 00:11:56.470 }, 00:11:56.470 "memory_domains": [ 00:11:56.470 { 00:11:56.470 "dma_device_id": "system", 00:11:56.470 "dma_device_type": 1 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.470 "dma_device_type": 2 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": "system", 00:11:56.470 "dma_device_type": 1 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.470 "dma_device_type": 2 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": "system", 00:11:56.470 "dma_device_type": 1 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:56.470 "dma_device_type": 2 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": "system", 00:11:56.470 "dma_device_type": 1 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.470 "dma_device_type": 2 00:11:56.470 } 00:11:56.470 ], 00:11:56.470 "driver_specific": { 00:11:56.470 "raid": { 00:11:56.470 "uuid": "deef64ed-d3de-4654-a94e-d2a9b187540a", 00:11:56.470 "strip_size_kb": 64, 00:11:56.470 "state": "online", 00:11:56.470 "raid_level": "concat", 00:11:56.470 "superblock": false, 00:11:56.470 "num_base_bdevs": 4, 00:11:56.470 "num_base_bdevs_discovered": 4, 00:11:56.470 "num_base_bdevs_operational": 4, 00:11:56.470 "base_bdevs_list": [ 00:11:56.470 { 00:11:56.470 "name": "BaseBdev1", 00:11:56.470 "uuid": "bf209c78-264c-4100-95b9-bd5186fdf58e", 00:11:56.470 "is_configured": true, 00:11:56.470 "data_offset": 0, 00:11:56.470 "data_size": 65536 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "name": "BaseBdev2", 00:11:56.470 "uuid": "82bb4891-996b-4aab-bee6-6be2f51923fd", 00:11:56.470 "is_configured": true, 00:11:56.470 "data_offset": 0, 00:11:56.470 "data_size": 65536 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "name": "BaseBdev3", 00:11:56.470 "uuid": "4986d094-8c8e-4d81-9966-b3a71ecdfac7", 00:11:56.470 "is_configured": true, 00:11:56.470 "data_offset": 0, 00:11:56.470 "data_size": 65536 00:11:56.470 }, 00:11:56.470 { 00:11:56.470 "name": "BaseBdev4", 00:11:56.470 "uuid": "65956792-125f-4bea-a768-8750dbc87038", 00:11:56.470 "is_configured": true, 00:11:56.470 "data_offset": 0, 00:11:56.470 "data_size": 65536 00:11:56.470 } 00:11:56.470 ] 00:11:56.470 } 00:11:56.470 } 00:11:56.470 }' 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.470 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:56.470 BaseBdev2 
00:11:56.470 BaseBdev3 00:11:56.470 BaseBdev4' 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.730 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.731 19:10:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.731 19:10:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.731 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.731 [2024-11-27 19:10:06.329676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.731 [2024-11-27 19:10:06.329791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:56.731 [2024-11-27 19:10:06.329861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.991 "name": "Existed_Raid", 00:11:56.991 "uuid": "deef64ed-d3de-4654-a94e-d2a9b187540a", 00:11:56.991 "strip_size_kb": 64, 00:11:56.991 "state": "offline", 00:11:56.991 "raid_level": "concat", 00:11:56.991 "superblock": false, 00:11:56.991 "num_base_bdevs": 4, 00:11:56.991 "num_base_bdevs_discovered": 3, 00:11:56.991 "num_base_bdevs_operational": 3, 00:11:56.991 "base_bdevs_list": [ 00:11:56.991 { 00:11:56.991 "name": null, 00:11:56.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.991 "is_configured": false, 00:11:56.991 "data_offset": 0, 00:11:56.991 "data_size": 65536 00:11:56.991 }, 00:11:56.991 { 00:11:56.991 "name": "BaseBdev2", 00:11:56.991 "uuid": "82bb4891-996b-4aab-bee6-6be2f51923fd", 00:11:56.991 "is_configured": 
true, 00:11:56.991 "data_offset": 0, 00:11:56.991 "data_size": 65536 00:11:56.991 }, 00:11:56.991 { 00:11:56.991 "name": "BaseBdev3", 00:11:56.991 "uuid": "4986d094-8c8e-4d81-9966-b3a71ecdfac7", 00:11:56.991 "is_configured": true, 00:11:56.991 "data_offset": 0, 00:11:56.991 "data_size": 65536 00:11:56.991 }, 00:11:56.991 { 00:11:56.991 "name": "BaseBdev4", 00:11:56.991 "uuid": "65956792-125f-4bea-a768-8750dbc87038", 00:11:56.991 "is_configured": true, 00:11:56.991 "data_offset": 0, 00:11:56.991 "data_size": 65536 00:11:56.991 } 00:11:56.991 ] 00:11:56.991 }' 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.991 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.251 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.511 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.511 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.511 19:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.511 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:57.511 19:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.511 [2024-11-27 19:10:06.900897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.511 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.511 [2024-11-27 19:10:07.043961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.771 19:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.771 [2024-11-27 19:10:07.204891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:57.771 [2024-11-27 19:10:07.204950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.771 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.031 BaseBdev2 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.031 [ 00:11:58.031 { 00:11:58.031 "name": "BaseBdev2", 00:11:58.031 "aliases": [ 00:11:58.031 "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb" 00:11:58.031 ], 00:11:58.031 "product_name": "Malloc disk", 00:11:58.031 "block_size": 512, 00:11:58.031 "num_blocks": 65536, 00:11:58.031 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:11:58.031 "assigned_rate_limits": { 00:11:58.031 "rw_ios_per_sec": 0, 00:11:58.031 "rw_mbytes_per_sec": 0, 00:11:58.031 "r_mbytes_per_sec": 0, 00:11:58.031 "w_mbytes_per_sec": 0 00:11:58.031 }, 00:11:58.031 "claimed": false, 00:11:58.031 "zoned": false, 00:11:58.031 "supported_io_types": { 00:11:58.031 "read": true, 00:11:58.031 "write": true, 00:11:58.031 "unmap": true, 00:11:58.031 "flush": true, 00:11:58.031 "reset": true, 00:11:58.031 "nvme_admin": false, 00:11:58.031 "nvme_io": false, 00:11:58.031 "nvme_io_md": false, 00:11:58.031 "write_zeroes": true, 00:11:58.031 "zcopy": true, 00:11:58.031 "get_zone_info": false, 00:11:58.031 "zone_management": false, 00:11:58.031 "zone_append": false, 00:11:58.031 "compare": false, 00:11:58.031 "compare_and_write": false, 00:11:58.031 "abort": true, 00:11:58.031 "seek_hole": false, 00:11:58.031 
"seek_data": false, 00:11:58.031 "copy": true, 00:11:58.031 "nvme_iov_md": false 00:11:58.031 }, 00:11:58.031 "memory_domains": [ 00:11:58.031 { 00:11:58.031 "dma_device_id": "system", 00:11:58.031 "dma_device_type": 1 00:11:58.031 }, 00:11:58.031 { 00:11:58.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.031 "dma_device_type": 2 00:11:58.031 } 00:11:58.031 ], 00:11:58.031 "driver_specific": {} 00:11:58.031 } 00:11:58.031 ] 00:11:58.031 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 BaseBdev3 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 [ 00:11:58.032 { 00:11:58.032 "name": "BaseBdev3", 00:11:58.032 "aliases": [ 00:11:58.032 "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d" 00:11:58.032 ], 00:11:58.032 "product_name": "Malloc disk", 00:11:58.032 "block_size": 512, 00:11:58.032 "num_blocks": 65536, 00:11:58.032 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:11:58.032 "assigned_rate_limits": { 00:11:58.032 "rw_ios_per_sec": 0, 00:11:58.032 "rw_mbytes_per_sec": 0, 00:11:58.032 "r_mbytes_per_sec": 0, 00:11:58.032 "w_mbytes_per_sec": 0 00:11:58.032 }, 00:11:58.032 "claimed": false, 00:11:58.032 "zoned": false, 00:11:58.032 "supported_io_types": { 00:11:58.032 "read": true, 00:11:58.032 "write": true, 00:11:58.032 "unmap": true, 00:11:58.032 "flush": true, 00:11:58.032 "reset": true, 00:11:58.032 "nvme_admin": false, 00:11:58.032 "nvme_io": false, 00:11:58.032 "nvme_io_md": false, 00:11:58.032 "write_zeroes": true, 00:11:58.032 "zcopy": true, 00:11:58.032 "get_zone_info": false, 00:11:58.032 "zone_management": false, 00:11:58.032 "zone_append": false, 00:11:58.032 "compare": false, 00:11:58.032 "compare_and_write": false, 00:11:58.032 "abort": true, 00:11:58.032 "seek_hole": false, 00:11:58.032 "seek_data": false, 
00:11:58.032 "copy": true, 00:11:58.032 "nvme_iov_md": false 00:11:58.032 }, 00:11:58.032 "memory_domains": [ 00:11:58.032 { 00:11:58.032 "dma_device_id": "system", 00:11:58.032 "dma_device_type": 1 00:11:58.032 }, 00:11:58.032 { 00:11:58.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.032 "dma_device_type": 2 00:11:58.032 } 00:11:58.032 ], 00:11:58.032 "driver_specific": {} 00:11:58.032 } 00:11:58.032 ] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 BaseBdev4 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.032 
19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 [ 00:11:58.032 { 00:11:58.032 "name": "BaseBdev4", 00:11:58.032 "aliases": [ 00:11:58.032 "16b756b6-c039-495b-90f0-861086a2f5cf" 00:11:58.032 ], 00:11:58.032 "product_name": "Malloc disk", 00:11:58.032 "block_size": 512, 00:11:58.032 "num_blocks": 65536, 00:11:58.032 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:11:58.032 "assigned_rate_limits": { 00:11:58.032 "rw_ios_per_sec": 0, 00:11:58.032 "rw_mbytes_per_sec": 0, 00:11:58.032 "r_mbytes_per_sec": 0, 00:11:58.032 "w_mbytes_per_sec": 0 00:11:58.032 }, 00:11:58.032 "claimed": false, 00:11:58.032 "zoned": false, 00:11:58.032 "supported_io_types": { 00:11:58.032 "read": true, 00:11:58.032 "write": true, 00:11:58.032 "unmap": true, 00:11:58.032 "flush": true, 00:11:58.032 "reset": true, 00:11:58.032 "nvme_admin": false, 00:11:58.032 "nvme_io": false, 00:11:58.032 "nvme_io_md": false, 00:11:58.032 "write_zeroes": true, 00:11:58.032 "zcopy": true, 00:11:58.032 "get_zone_info": false, 00:11:58.032 "zone_management": false, 00:11:58.032 "zone_append": false, 00:11:58.032 "compare": false, 00:11:58.032 "compare_and_write": false, 00:11:58.032 "abort": true, 00:11:58.032 "seek_hole": false, 00:11:58.032 "seek_data": false, 00:11:58.032 
"copy": true, 00:11:58.032 "nvme_iov_md": false 00:11:58.032 }, 00:11:58.032 "memory_domains": [ 00:11:58.032 { 00:11:58.032 "dma_device_id": "system", 00:11:58.032 "dma_device_type": 1 00:11:58.032 }, 00:11:58.032 { 00:11:58.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.032 "dma_device_type": 2 00:11:58.032 } 00:11:58.032 ], 00:11:58.032 "driver_specific": {} 00:11:58.032 } 00:11:58.032 ] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 [2024-11-27 19:10:07.618313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.032 [2024-11-27 19:10:07.618403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.032 [2024-11-27 19:10:07.618446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.032 [2024-11-27 19:10:07.620621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.032 [2024-11-27 19:10:07.620725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.032 19:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.032 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.292 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.292 "name": "Existed_Raid", 00:11:58.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.292 "strip_size_kb": 64, 00:11:58.292 "state": "configuring", 00:11:58.292 
"raid_level": "concat", 00:11:58.292 "superblock": false, 00:11:58.292 "num_base_bdevs": 4, 00:11:58.292 "num_base_bdevs_discovered": 3, 00:11:58.292 "num_base_bdevs_operational": 4, 00:11:58.292 "base_bdevs_list": [ 00:11:58.292 { 00:11:58.292 "name": "BaseBdev1", 00:11:58.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.293 "is_configured": false, 00:11:58.293 "data_offset": 0, 00:11:58.293 "data_size": 0 00:11:58.293 }, 00:11:58.293 { 00:11:58.293 "name": "BaseBdev2", 00:11:58.293 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:11:58.293 "is_configured": true, 00:11:58.293 "data_offset": 0, 00:11:58.293 "data_size": 65536 00:11:58.293 }, 00:11:58.293 { 00:11:58.293 "name": "BaseBdev3", 00:11:58.293 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:11:58.293 "is_configured": true, 00:11:58.293 "data_offset": 0, 00:11:58.293 "data_size": 65536 00:11:58.293 }, 00:11:58.293 { 00:11:58.293 "name": "BaseBdev4", 00:11:58.293 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:11:58.293 "is_configured": true, 00:11:58.293 "data_offset": 0, 00:11:58.293 "data_size": 65536 00:11:58.293 } 00:11:58.293 ] 00:11:58.293 }' 00:11:58.293 19:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.293 19:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.553 [2024-11-27 19:10:08.061593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.553 "name": "Existed_Raid", 00:11:58.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.553 "strip_size_kb": 64, 00:11:58.553 "state": "configuring", 00:11:58.553 "raid_level": "concat", 00:11:58.553 "superblock": false, 
00:11:58.553 "num_base_bdevs": 4, 00:11:58.553 "num_base_bdevs_discovered": 2, 00:11:58.553 "num_base_bdevs_operational": 4, 00:11:58.553 "base_bdevs_list": [ 00:11:58.553 { 00:11:58.553 "name": "BaseBdev1", 00:11:58.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.553 "is_configured": false, 00:11:58.553 "data_offset": 0, 00:11:58.553 "data_size": 0 00:11:58.553 }, 00:11:58.553 { 00:11:58.553 "name": null, 00:11:58.553 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:11:58.553 "is_configured": false, 00:11:58.553 "data_offset": 0, 00:11:58.553 "data_size": 65536 00:11:58.553 }, 00:11:58.553 { 00:11:58.553 "name": "BaseBdev3", 00:11:58.553 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:11:58.553 "is_configured": true, 00:11:58.553 "data_offset": 0, 00:11:58.553 "data_size": 65536 00:11:58.553 }, 00:11:58.553 { 00:11:58.553 "name": "BaseBdev4", 00:11:58.553 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:11:58.553 "is_configured": true, 00:11:58.553 "data_offset": 0, 00:11:58.553 "data_size": 65536 00:11:58.553 } 00:11:58.553 ] 00:11:58.553 }' 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.553 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:59.123 19:10:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.123 [2024-11-27 19:10:08.571133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.123 BaseBdev1 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.123 19:10:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.123 [ 00:11:59.123 { 00:11:59.123 "name": "BaseBdev1", 00:11:59.123 "aliases": [ 00:11:59.123 "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a" 00:11:59.123 ], 00:11:59.123 "product_name": "Malloc disk", 00:11:59.123 "block_size": 512, 00:11:59.123 "num_blocks": 65536, 00:11:59.123 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:11:59.123 "assigned_rate_limits": { 00:11:59.123 "rw_ios_per_sec": 0, 00:11:59.123 "rw_mbytes_per_sec": 0, 00:11:59.123 "r_mbytes_per_sec": 0, 00:11:59.123 "w_mbytes_per_sec": 0 00:11:59.123 }, 00:11:59.123 "claimed": true, 00:11:59.123 "claim_type": "exclusive_write", 00:11:59.123 "zoned": false, 00:11:59.123 "supported_io_types": { 00:11:59.123 "read": true, 00:11:59.123 "write": true, 00:11:59.123 "unmap": true, 00:11:59.123 "flush": true, 00:11:59.123 "reset": true, 00:11:59.123 "nvme_admin": false, 00:11:59.123 "nvme_io": false, 00:11:59.124 "nvme_io_md": false, 00:11:59.124 "write_zeroes": true, 00:11:59.124 "zcopy": true, 00:11:59.124 "get_zone_info": false, 00:11:59.124 "zone_management": false, 00:11:59.124 "zone_append": false, 00:11:59.124 "compare": false, 00:11:59.124 "compare_and_write": false, 00:11:59.124 "abort": true, 00:11:59.124 "seek_hole": false, 00:11:59.124 "seek_data": false, 00:11:59.124 "copy": true, 00:11:59.124 "nvme_iov_md": false 00:11:59.124 }, 00:11:59.124 "memory_domains": [ 00:11:59.124 { 00:11:59.124 "dma_device_id": "system", 00:11:59.124 "dma_device_type": 1 00:11:59.124 }, 00:11:59.124 { 00:11:59.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.124 "dma_device_type": 2 00:11:59.124 } 00:11:59.124 ], 00:11:59.124 "driver_specific": {} 00:11:59.124 } 00:11:59.124 ] 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.124 "name": "Existed_Raid", 00:11:59.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.124 "strip_size_kb": 64, 00:11:59.124 "state": "configuring", 00:11:59.124 "raid_level": "concat", 00:11:59.124 "superblock": false, 
00:11:59.124 "num_base_bdevs": 4, 00:11:59.124 "num_base_bdevs_discovered": 3, 00:11:59.124 "num_base_bdevs_operational": 4, 00:11:59.124 "base_bdevs_list": [ 00:11:59.124 { 00:11:59.124 "name": "BaseBdev1", 00:11:59.124 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:11:59.124 "is_configured": true, 00:11:59.124 "data_offset": 0, 00:11:59.124 "data_size": 65536 00:11:59.124 }, 00:11:59.124 { 00:11:59.124 "name": null, 00:11:59.124 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:11:59.124 "is_configured": false, 00:11:59.124 "data_offset": 0, 00:11:59.124 "data_size": 65536 00:11:59.124 }, 00:11:59.124 { 00:11:59.124 "name": "BaseBdev3", 00:11:59.124 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:11:59.124 "is_configured": true, 00:11:59.124 "data_offset": 0, 00:11:59.124 "data_size": 65536 00:11:59.124 }, 00:11:59.124 { 00:11:59.124 "name": "BaseBdev4", 00:11:59.124 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:11:59.124 "is_configured": true, 00:11:59.124 "data_offset": 0, 00:11:59.124 "data_size": 65536 00:11:59.124 } 00:11:59.124 ] 00:11:59.124 }' 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.124 19:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:59.694 19:10:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.694 [2024-11-27 19:10:09.106303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.694 19:10:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.694 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.694 "name": "Existed_Raid", 00:11:59.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.694 "strip_size_kb": 64, 00:11:59.694 "state": "configuring", 00:11:59.694 "raid_level": "concat", 00:11:59.694 "superblock": false, 00:11:59.694 "num_base_bdevs": 4, 00:11:59.694 "num_base_bdevs_discovered": 2, 00:11:59.694 "num_base_bdevs_operational": 4, 00:11:59.694 "base_bdevs_list": [ 00:11:59.694 { 00:11:59.694 "name": "BaseBdev1", 00:11:59.694 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:11:59.694 "is_configured": true, 00:11:59.694 "data_offset": 0, 00:11:59.695 "data_size": 65536 00:11:59.695 }, 00:11:59.695 { 00:11:59.695 "name": null, 00:11:59.695 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:11:59.695 "is_configured": false, 00:11:59.695 "data_offset": 0, 00:11:59.695 "data_size": 65536 00:11:59.695 }, 00:11:59.695 { 00:11:59.695 "name": null, 00:11:59.695 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:11:59.695 "is_configured": false, 00:11:59.695 "data_offset": 0, 00:11:59.695 "data_size": 65536 00:11:59.695 }, 00:11:59.695 { 00:11:59.695 "name": "BaseBdev4", 00:11:59.695 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:11:59.695 "is_configured": true, 00:11:59.695 "data_offset": 0, 00:11:59.695 "data_size": 65536 00:11:59.695 } 00:11:59.695 ] 00:11:59.695 }' 00:11:59.695 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.695 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.954 19:10:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.954 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:59.954 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.954 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 [2024-11-27 19:10:09.629410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.214 "name": "Existed_Raid", 00:12:00.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.214 "strip_size_kb": 64, 00:12:00.214 "state": "configuring", 00:12:00.214 "raid_level": "concat", 00:12:00.214 "superblock": false, 00:12:00.214 "num_base_bdevs": 4, 00:12:00.214 "num_base_bdevs_discovered": 3, 00:12:00.214 "num_base_bdevs_operational": 4, 00:12:00.214 "base_bdevs_list": [ 00:12:00.214 { 00:12:00.214 "name": "BaseBdev1", 00:12:00.214 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:12:00.214 "is_configured": true, 00:12:00.214 "data_offset": 0, 00:12:00.214 "data_size": 65536 00:12:00.214 }, 00:12:00.214 { 00:12:00.214 "name": null, 00:12:00.214 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:12:00.214 "is_configured": false, 00:12:00.214 "data_offset": 0, 00:12:00.214 "data_size": 65536 00:12:00.214 }, 00:12:00.214 { 00:12:00.214 "name": "BaseBdev3", 00:12:00.214 "uuid": 
"2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:12:00.214 "is_configured": true, 00:12:00.214 "data_offset": 0, 00:12:00.214 "data_size": 65536 00:12:00.214 }, 00:12:00.214 { 00:12:00.214 "name": "BaseBdev4", 00:12:00.214 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:12:00.214 "is_configured": true, 00:12:00.214 "data_offset": 0, 00:12:00.214 "data_size": 65536 00:12:00.214 } 00:12:00.214 ] 00:12:00.214 }' 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.214 19:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.474 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.475 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.475 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.475 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.475 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.735 [2024-11-27 19:10:10.136609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.735 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.736 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.736 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.736 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.736 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.736 "name": "Existed_Raid", 00:12:00.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.736 "strip_size_kb": 64, 00:12:00.736 "state": "configuring", 00:12:00.736 "raid_level": "concat", 00:12:00.736 "superblock": false, 00:12:00.736 "num_base_bdevs": 4, 00:12:00.736 
"num_base_bdevs_discovered": 2, 00:12:00.736 "num_base_bdevs_operational": 4, 00:12:00.736 "base_bdevs_list": [ 00:12:00.736 { 00:12:00.736 "name": null, 00:12:00.736 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:12:00.736 "is_configured": false, 00:12:00.736 "data_offset": 0, 00:12:00.736 "data_size": 65536 00:12:00.736 }, 00:12:00.736 { 00:12:00.736 "name": null, 00:12:00.736 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:12:00.736 "is_configured": false, 00:12:00.736 "data_offset": 0, 00:12:00.736 "data_size": 65536 00:12:00.736 }, 00:12:00.736 { 00:12:00.736 "name": "BaseBdev3", 00:12:00.736 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:12:00.736 "is_configured": true, 00:12:00.736 "data_offset": 0, 00:12:00.736 "data_size": 65536 00:12:00.736 }, 00:12:00.736 { 00:12:00.736 "name": "BaseBdev4", 00:12:00.736 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:12:00.736 "is_configured": true, 00:12:00.736 "data_offset": 0, 00:12:00.736 "data_size": 65536 00:12:00.736 } 00:12:00.736 ] 00:12:00.736 }' 00:12:00.736 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.736 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.306 [2024-11-27 19:10:10.703780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.306 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.306 "name": "Existed_Raid", 00:12:01.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.306 "strip_size_kb": 64, 00:12:01.306 "state": "configuring", 00:12:01.306 "raid_level": "concat", 00:12:01.306 "superblock": false, 00:12:01.306 "num_base_bdevs": 4, 00:12:01.306 "num_base_bdevs_discovered": 3, 00:12:01.306 "num_base_bdevs_operational": 4, 00:12:01.306 "base_bdevs_list": [ 00:12:01.306 { 00:12:01.306 "name": null, 00:12:01.306 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:12:01.307 "is_configured": false, 00:12:01.307 "data_offset": 0, 00:12:01.307 "data_size": 65536 00:12:01.307 }, 00:12:01.307 { 00:12:01.307 "name": "BaseBdev2", 00:12:01.307 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:12:01.307 "is_configured": true, 00:12:01.307 "data_offset": 0, 00:12:01.307 "data_size": 65536 00:12:01.307 }, 00:12:01.307 { 00:12:01.307 "name": "BaseBdev3", 00:12:01.307 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:12:01.307 "is_configured": true, 00:12:01.307 "data_offset": 0, 00:12:01.307 "data_size": 65536 00:12:01.307 }, 00:12:01.307 { 00:12:01.307 "name": "BaseBdev4", 00:12:01.307 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:12:01.307 "is_configured": true, 00:12:01.307 "data_offset": 0, 00:12:01.307 "data_size": 65536 00:12:01.307 } 00:12:01.307 ] 00:12:01.307 }' 00:12:01.307 19:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.307 19:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.566 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:01.566 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.566 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.566 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.566 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0f78335b-2bdc-44ae-a72f-8dc1b7967c2a 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.826 [2024-11-27 19:10:11.303313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:01.826 [2024-11-27 19:10:11.303395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:01.826 [2024-11-27 19:10:11.303404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:01.826 [2024-11-27 19:10:11.303751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:01.826 [2024-11-27 19:10:11.303936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:01.826 [2024-11-27 19:10:11.303949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:01.826 [2024-11-27 19:10:11.304226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.826 NewBaseBdev 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.826 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.827 [ 00:12:01.827 { 00:12:01.827 "name": "NewBaseBdev", 00:12:01.827 "aliases": [ 00:12:01.827 "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a" 00:12:01.827 ], 00:12:01.827 "product_name": "Malloc disk", 00:12:01.827 "block_size": 512, 00:12:01.827 "num_blocks": 65536, 00:12:01.827 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:12:01.827 "assigned_rate_limits": { 00:12:01.827 "rw_ios_per_sec": 0, 00:12:01.827 "rw_mbytes_per_sec": 0, 00:12:01.827 "r_mbytes_per_sec": 0, 00:12:01.827 "w_mbytes_per_sec": 0 00:12:01.827 }, 00:12:01.827 "claimed": true, 00:12:01.827 "claim_type": "exclusive_write", 00:12:01.827 "zoned": false, 00:12:01.827 "supported_io_types": { 00:12:01.827 "read": true, 00:12:01.827 "write": true, 00:12:01.827 "unmap": true, 00:12:01.827 "flush": true, 00:12:01.827 "reset": true, 00:12:01.827 "nvme_admin": false, 00:12:01.827 "nvme_io": false, 00:12:01.827 "nvme_io_md": false, 00:12:01.827 "write_zeroes": true, 00:12:01.827 "zcopy": true, 00:12:01.827 "get_zone_info": false, 00:12:01.827 "zone_management": false, 00:12:01.827 "zone_append": false, 00:12:01.827 "compare": false, 00:12:01.827 "compare_and_write": false, 00:12:01.827 "abort": true, 00:12:01.827 "seek_hole": false, 00:12:01.827 "seek_data": false, 00:12:01.827 "copy": true, 00:12:01.827 "nvme_iov_md": false 00:12:01.827 }, 00:12:01.827 "memory_domains": [ 00:12:01.827 { 00:12:01.827 "dma_device_id": "system", 00:12:01.827 "dma_device_type": 1 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.827 "dma_device_type": 2 00:12:01.827 } 00:12:01.827 ], 00:12:01.827 "driver_specific": {} 00:12:01.827 } 00:12:01.827 ] 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.827 "name": "Existed_Raid", 00:12:01.827 "uuid": "729a5cb5-9a64-4805-a905-d4a039c7b2f1", 00:12:01.827 "strip_size_kb": 64, 00:12:01.827 "state": "online", 00:12:01.827 "raid_level": "concat", 00:12:01.827 "superblock": false, 00:12:01.827 
"num_base_bdevs": 4, 00:12:01.827 "num_base_bdevs_discovered": 4, 00:12:01.827 "num_base_bdevs_operational": 4, 00:12:01.827 "base_bdevs_list": [ 00:12:01.827 { 00:12:01.827 "name": "NewBaseBdev", 00:12:01.827 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:12:01.827 "is_configured": true, 00:12:01.827 "data_offset": 0, 00:12:01.827 "data_size": 65536 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "name": "BaseBdev2", 00:12:01.827 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:12:01.827 "is_configured": true, 00:12:01.827 "data_offset": 0, 00:12:01.827 "data_size": 65536 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "name": "BaseBdev3", 00:12:01.827 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:12:01.827 "is_configured": true, 00:12:01.827 "data_offset": 0, 00:12:01.827 "data_size": 65536 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "name": "BaseBdev4", 00:12:01.827 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:12:01.827 "is_configured": true, 00:12:01.827 "data_offset": 0, 00:12:01.827 "data_size": 65536 00:12:01.827 } 00:12:01.827 ] 00:12:01.827 }' 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.827 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.397 19:10:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.397 [2024-11-27 19:10:11.782976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.397 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.397 "name": "Existed_Raid", 00:12:02.397 "aliases": [ 00:12:02.397 "729a5cb5-9a64-4805-a905-d4a039c7b2f1" 00:12:02.397 ], 00:12:02.397 "product_name": "Raid Volume", 00:12:02.397 "block_size": 512, 00:12:02.397 "num_blocks": 262144, 00:12:02.397 "uuid": "729a5cb5-9a64-4805-a905-d4a039c7b2f1", 00:12:02.397 "assigned_rate_limits": { 00:12:02.397 "rw_ios_per_sec": 0, 00:12:02.397 "rw_mbytes_per_sec": 0, 00:12:02.397 "r_mbytes_per_sec": 0, 00:12:02.397 "w_mbytes_per_sec": 0 00:12:02.397 }, 00:12:02.397 "claimed": false, 00:12:02.397 "zoned": false, 00:12:02.397 "supported_io_types": { 00:12:02.397 "read": true, 00:12:02.397 "write": true, 00:12:02.397 "unmap": true, 00:12:02.397 "flush": true, 00:12:02.397 "reset": true, 00:12:02.397 "nvme_admin": false, 00:12:02.397 "nvme_io": false, 00:12:02.397 "nvme_io_md": false, 00:12:02.397 "write_zeroes": true, 00:12:02.397 "zcopy": false, 00:12:02.397 "get_zone_info": false, 00:12:02.397 "zone_management": false, 00:12:02.397 "zone_append": false, 00:12:02.397 "compare": false, 00:12:02.397 "compare_and_write": false, 00:12:02.397 "abort": false, 00:12:02.397 "seek_hole": false, 00:12:02.397 "seek_data": false, 00:12:02.397 "copy": false, 00:12:02.397 "nvme_iov_md": false 00:12:02.397 }, 
00:12:02.397 "memory_domains": [ 00:12:02.397 { 00:12:02.397 "dma_device_id": "system", 00:12:02.397 "dma_device_type": 1 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.397 "dma_device_type": 2 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "system", 00:12:02.397 "dma_device_type": 1 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.397 "dma_device_type": 2 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "system", 00:12:02.397 "dma_device_type": 1 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.397 "dma_device_type": 2 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "system", 00:12:02.397 "dma_device_type": 1 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.397 "dma_device_type": 2 00:12:02.397 } 00:12:02.397 ], 00:12:02.397 "driver_specific": { 00:12:02.397 "raid": { 00:12:02.397 "uuid": "729a5cb5-9a64-4805-a905-d4a039c7b2f1", 00:12:02.397 "strip_size_kb": 64, 00:12:02.397 "state": "online", 00:12:02.397 "raid_level": "concat", 00:12:02.397 "superblock": false, 00:12:02.397 "num_base_bdevs": 4, 00:12:02.397 "num_base_bdevs_discovered": 4, 00:12:02.397 "num_base_bdevs_operational": 4, 00:12:02.397 "base_bdevs_list": [ 00:12:02.397 { 00:12:02.397 "name": "NewBaseBdev", 00:12:02.397 "uuid": "0f78335b-2bdc-44ae-a72f-8dc1b7967c2a", 00:12:02.397 "is_configured": true, 00:12:02.397 "data_offset": 0, 00:12:02.397 "data_size": 65536 00:12:02.397 }, 00:12:02.397 { 00:12:02.397 "name": "BaseBdev2", 00:12:02.397 "uuid": "4f37c7ad-5601-48c6-96c5-6cb1e1e991eb", 00:12:02.398 "is_configured": true, 00:12:02.398 "data_offset": 0, 00:12:02.398 "data_size": 65536 00:12:02.398 }, 00:12:02.398 { 00:12:02.398 "name": "BaseBdev3", 00:12:02.398 "uuid": "2f9c47b5-7c7f-4dd8-bd89-6beb72d4a60d", 00:12:02.398 "is_configured": true, 00:12:02.398 "data_offset": 0, 
00:12:02.398 "data_size": 65536 00:12:02.398 }, 00:12:02.398 { 00:12:02.398 "name": "BaseBdev4", 00:12:02.398 "uuid": "16b756b6-c039-495b-90f0-861086a2f5cf", 00:12:02.398 "is_configured": true, 00:12:02.398 "data_offset": 0, 00:12:02.398 "data_size": 65536 00:12:02.398 } 00:12:02.398 ] 00:12:02.398 } 00:12:02.398 } 00:12:02.398 }' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:02.398 BaseBdev2 00:12:02.398 BaseBdev3 00:12:02.398 BaseBdev4' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.398 19:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.398 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.657 [2024-11-27 19:10:12.113992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.657 [2024-11-27 19:10:12.114071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.657 [2024-11-27 19:10:12.114184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.657 [2024-11-27 19:10:12.114279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.657 [2024-11-27 19:10:12.114354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71380 00:12:02.657 19:10:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71380 ']' 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71380 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71380 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71380' 00:12:02.657 killing process with pid 71380 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71380 00:12:02.657 [2024-11-27 19:10:12.152549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.657 19:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71380 00:12:03.226 [2024-11-27 19:10:12.591891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.634 19:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:04.634 00:12:04.634 real 0m11.752s 00:12:04.634 user 0m18.366s 00:12:04.634 sys 0m2.201s 00:12:04.634 19:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.634 ************************************ 00:12:04.634 END TEST raid_state_function_test 00:12:04.634 ************************************ 00:12:04.634 19:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.634 19:10:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:04.635 19:10:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:04.635 19:10:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.635 19:10:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.635 ************************************ 00:12:04.635 START TEST raid_state_function_test_sb 00:12:04.635 ************************************ 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:04.635 Process raid pid: 72060 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72060 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72060' 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72060 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72060 ']' 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.635 19:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.635 [2024-11-27 19:10:14.018369] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
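The `(( i <= num_base_bdevs ))` / `echo BaseBdevN` xtrace lines above come from bdev_raid.sh building its `base_bdevs` array before creating the raid. A minimal self-contained sketch of that pattern — the loop body is an assumption reconstructed from the echoed output, not copied from SPDK source:

```shell
# Reconstructed sketch of the bdev_raid.sh@209 loop seen in the xtrace:
# emit BaseBdev1..BaseBdevN and collect the names into a bash array.
num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

The array is then what the test splices into `bdev_raid_create -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'`, as the rpc_cmd line in the trace shows.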
00:12:04.635 [2024-11-27 19:10:14.018588] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.635 [2024-11-27 19:10:14.192485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.894 [2024-11-27 19:10:14.338661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.152 [2024-11-27 19:10:14.585605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.152 [2024-11-27 19:10:14.585735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 [2024-11-27 19:10:14.847428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.412 [2024-11-27 19:10:14.847497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.412 [2024-11-27 19:10:14.847523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.412 [2024-11-27 19:10:14.847535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.412 [2024-11-27 19:10:14.847541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:05.412 [2024-11-27 19:10:14.847550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.412 [2024-11-27 19:10:14.847557] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:05.412 [2024-11-27 19:10:14.847566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.412 
19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.412 "name": "Existed_Raid", 00:12:05.412 "uuid": "3e28690a-f8db-4cad-a10a-b87d5e2f4b39", 00:12:05.412 "strip_size_kb": 64, 00:12:05.412 "state": "configuring", 00:12:05.412 "raid_level": "concat", 00:12:05.412 "superblock": true, 00:12:05.412 "num_base_bdevs": 4, 00:12:05.412 "num_base_bdevs_discovered": 0, 00:12:05.412 "num_base_bdevs_operational": 4, 00:12:05.412 "base_bdevs_list": [ 00:12:05.412 { 00:12:05.412 "name": "BaseBdev1", 00:12:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.412 "is_configured": false, 00:12:05.412 "data_offset": 0, 00:12:05.412 "data_size": 0 00:12:05.412 }, 00:12:05.412 { 00:12:05.412 "name": "BaseBdev2", 00:12:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.412 "is_configured": false, 00:12:05.412 "data_offset": 0, 00:12:05.412 "data_size": 0 00:12:05.412 }, 00:12:05.412 { 00:12:05.412 "name": "BaseBdev3", 00:12:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.412 "is_configured": false, 00:12:05.412 "data_offset": 0, 00:12:05.412 "data_size": 0 00:12:05.412 }, 00:12:05.412 { 00:12:05.412 "name": "BaseBdev4", 00:12:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.412 "is_configured": false, 00:12:05.412 "data_offset": 0, 00:12:05.412 "data_size": 0 00:12:05.412 } 00:12:05.412 ] 00:12:05.412 }' 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.412 19:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.980 19:10:15 
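The `verify_raid_bdev_state` helper traced above fetches all raid bdevs and filters for the one under test with the `jq` expression visible at bdev_raid.sh@113. A sketch of that query pattern, with inline JSON standing in for real `bdev_raid_get_bdevs` output (requires `jq`; the stub JSON is illustrative, not captured from a live run):

```shell
# Stand-in for: rpc_cmd bdev_raid_get_bdevs all
raid_bdevs='[{"name": "Existed_Raid", "state": "configuring"},
             {"name": "Other_Raid", "state": "online"}]'
# Same filter the trace shows: pick the entry named Existed_Raid.
state=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid") | .state')
echo "$state"   # configuring
```

With no base bdevs present yet, the raid stays in `configuring` with `num_base_bdevs_discovered: 0`, which is exactly what the JSON dump above reports.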
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.980 [2024-11-27 19:10:15.346528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.980 [2024-11-27 19:10:15.346642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.980 [2024-11-27 19:10:15.358499] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.980 [2024-11-27 19:10:15.358583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.980 [2024-11-27 19:10:15.358610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.980 [2024-11-27 19:10:15.358634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.980 [2024-11-27 19:10:15.358652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.980 [2024-11-27 19:10:15.358674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.980 [2024-11-27 19:10:15.358699] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:05.980 [2024-11-27 19:10:15.358722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.980 [2024-11-27 19:10:15.415631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.980 BaseBdev1 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb 
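The `waitforbdev` / `waitforlisten` calls traced here follow a common poll-until-ready shape: retry a probe (here `rpc_cmd bdev_get_bdevs -b <name> -t 2000`, earlier a check for the `/var/tmp/spdk.sock` listener) up to `max_retries` times. A hedged, generic sketch of that loop — `wait_for_probe` is a hypothetical name, and the probe command is a stand-in, not the SPDK helper itself:

```shell
# Generic retry loop in the style of waitforbdev/waitforlisten: run the probe
# command until it succeeds or max_retries is exhausted.
wait_for_probe() {
    local max_retries=$1; shift
    local i
    for ((i = 0; i < max_retries; i++)); do
        "$@" && return 0   # probe succeeded: the resource is ready
        sleep 0.1          # back off briefly before retrying
    done
    return 1               # gave up after max_retries attempts
}

wait_for_probe 3 true && echo "bdev ready"
```

In the real script the probe also carries a per-call timeout (`-t 2000`), so a hung RPC cannot stall a single retry indefinitely.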
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.980 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.980 [ 00:12:05.980 { 00:12:05.980 "name": "BaseBdev1", 00:12:05.980 "aliases": [ 00:12:05.980 "063bc9f9-6437-4e2e-b3d4-9927ad88b784" 00:12:05.980 ], 00:12:05.980 "product_name": "Malloc disk", 00:12:05.980 "block_size": 512, 00:12:05.980 "num_blocks": 65536, 00:12:05.980 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:05.980 "assigned_rate_limits": { 00:12:05.980 "rw_ios_per_sec": 0, 00:12:05.980 "rw_mbytes_per_sec": 0, 00:12:05.980 "r_mbytes_per_sec": 0, 00:12:05.980 "w_mbytes_per_sec": 0 00:12:05.980 }, 00:12:05.980 "claimed": true, 00:12:05.980 "claim_type": "exclusive_write", 00:12:05.980 "zoned": false, 00:12:05.980 "supported_io_types": { 00:12:05.980 "read": true, 00:12:05.980 "write": true, 00:12:05.980 "unmap": true, 00:12:05.980 "flush": true, 00:12:05.980 "reset": true, 00:12:05.980 "nvme_admin": false, 00:12:05.980 "nvme_io": false, 00:12:05.980 "nvme_io_md": false, 00:12:05.980 "write_zeroes": true, 00:12:05.980 "zcopy": true, 00:12:05.980 "get_zone_info": false, 00:12:05.980 "zone_management": false, 00:12:05.980 "zone_append": false, 00:12:05.980 "compare": false, 00:12:05.980 "compare_and_write": false, 00:12:05.981 "abort": true, 00:12:05.981 "seek_hole": false, 00:12:05.981 "seek_data": false, 00:12:05.981 "copy": true, 00:12:05.981 "nvme_iov_md": false 00:12:05.981 }, 00:12:05.981 "memory_domains": [ 00:12:05.981 { 00:12:05.981 "dma_device_id": "system", 00:12:05.981 "dma_device_type": 1 00:12:05.981 }, 00:12:05.981 { 00:12:05.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.981 "dma_device_type": 2 00:12:05.981 } 
00:12:05.981 ], 00:12:05.981 "driver_specific": {} 00:12:05.981 } 00:12:05.981 ] 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.981 19:10:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.981 "name": "Existed_Raid", 00:12:05.981 "uuid": "3f27e710-7eb7-44eb-8f0f-dd0f5c6a36dc", 00:12:05.981 "strip_size_kb": 64, 00:12:05.981 "state": "configuring", 00:12:05.981 "raid_level": "concat", 00:12:05.981 "superblock": true, 00:12:05.981 "num_base_bdevs": 4, 00:12:05.981 "num_base_bdevs_discovered": 1, 00:12:05.981 "num_base_bdevs_operational": 4, 00:12:05.981 "base_bdevs_list": [ 00:12:05.981 { 00:12:05.981 "name": "BaseBdev1", 00:12:05.981 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:05.981 "is_configured": true, 00:12:05.981 "data_offset": 2048, 00:12:05.981 "data_size": 63488 00:12:05.981 }, 00:12:05.981 { 00:12:05.981 "name": "BaseBdev2", 00:12:05.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.981 "is_configured": false, 00:12:05.981 "data_offset": 0, 00:12:05.981 "data_size": 0 00:12:05.981 }, 00:12:05.981 { 00:12:05.981 "name": "BaseBdev3", 00:12:05.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.981 "is_configured": false, 00:12:05.981 "data_offset": 0, 00:12:05.981 "data_size": 0 00:12:05.981 }, 00:12:05.981 { 00:12:05.981 "name": "BaseBdev4", 00:12:05.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.981 "is_configured": false, 00:12:05.981 "data_offset": 0, 00:12:05.981 "data_size": 0 00:12:05.981 } 00:12:05.981 ] 00:12:05.981 }' 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.981 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.549 19:10:15 
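The raid_bdev_info above reports `data_offset: 2048` and `data_size: 63488` for the configured BaseBdev1, which is consistent with the `bdev_malloc_create 32 512` call earlier: a 32 MiB bdev in 512-byte blocks, with the first 2048 blocks apparently reserved for the `-s` superblock. A quick check of that arithmetic (the superblock reservation is inferred from the reported offsets, not taken from SPDK source):

```shell
# Each base bdev: bdev_malloc_create 32 512 -> 32 MiB of 512-byte blocks.
num_blocks=$((32 * 1024 * 1024 / 512))    # 65536 blocks total
data_offset=2048                          # blocks the superblock appears to occupy
data_size=$((num_blocks - data_offset))
echo "$data_size"   # 63488
```

That matches the `num_blocks: 65536` in the `bdev_get_bdevs` dump for BaseBdev1 and the 63488-block data region the raid reports once the bdev is claimed.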
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.549 [2024-11-27 19:10:15.926867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.549 [2024-11-27 19:10:15.927005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.549 [2024-11-27 19:10:15.938926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.549 [2024-11-27 19:10:15.941148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:06.549 [2024-11-27 19:10:15.941196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:06.549 [2024-11-27 19:10:15.941207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:06.549 [2024-11-27 19:10:15.941219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:06.549 [2024-11-27 19:10:15.941225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:06.549 [2024-11-27 19:10:15.941235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.549 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:06.549 "name": "Existed_Raid", 00:12:06.549 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:06.549 "strip_size_kb": 64, 00:12:06.549 "state": "configuring", 00:12:06.549 "raid_level": "concat", 00:12:06.549 "superblock": true, 00:12:06.549 "num_base_bdevs": 4, 00:12:06.549 "num_base_bdevs_discovered": 1, 00:12:06.549 "num_base_bdevs_operational": 4, 00:12:06.549 "base_bdevs_list": [ 00:12:06.549 { 00:12:06.549 "name": "BaseBdev1", 00:12:06.549 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:06.549 "is_configured": true, 00:12:06.549 "data_offset": 2048, 00:12:06.549 "data_size": 63488 00:12:06.549 }, 00:12:06.549 { 00:12:06.549 "name": "BaseBdev2", 00:12:06.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.550 "is_configured": false, 00:12:06.550 "data_offset": 0, 00:12:06.550 "data_size": 0 00:12:06.550 }, 00:12:06.550 { 00:12:06.550 "name": "BaseBdev3", 00:12:06.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.550 "is_configured": false, 00:12:06.550 "data_offset": 0, 00:12:06.550 "data_size": 0 00:12:06.550 }, 00:12:06.550 { 00:12:06.550 "name": "BaseBdev4", 00:12:06.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.550 "is_configured": false, 00:12:06.550 "data_offset": 0, 00:12:06.550 "data_size": 0 00:12:06.550 } 00:12:06.550 ] 00:12:06.550 }' 00:12:06.550 19:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.550 19:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.808 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.808 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.808 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.067 [2024-11-27 19:10:16.454365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:07.067 BaseBdev2 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.067 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.068 [ 00:12:07.068 { 00:12:07.068 "name": "BaseBdev2", 00:12:07.068 "aliases": [ 00:12:07.068 "d7a1378a-beb0-4d9a-af61-f6167ad7797e" 00:12:07.068 ], 00:12:07.068 "product_name": "Malloc disk", 00:12:07.068 "block_size": 512, 00:12:07.068 "num_blocks": 65536, 00:12:07.068 "uuid": "d7a1378a-beb0-4d9a-af61-f6167ad7797e", 
00:12:07.068 "assigned_rate_limits": { 00:12:07.068 "rw_ios_per_sec": 0, 00:12:07.068 "rw_mbytes_per_sec": 0, 00:12:07.068 "r_mbytes_per_sec": 0, 00:12:07.068 "w_mbytes_per_sec": 0 00:12:07.068 }, 00:12:07.068 "claimed": true, 00:12:07.068 "claim_type": "exclusive_write", 00:12:07.068 "zoned": false, 00:12:07.068 "supported_io_types": { 00:12:07.068 "read": true, 00:12:07.068 "write": true, 00:12:07.068 "unmap": true, 00:12:07.068 "flush": true, 00:12:07.068 "reset": true, 00:12:07.068 "nvme_admin": false, 00:12:07.068 "nvme_io": false, 00:12:07.068 "nvme_io_md": false, 00:12:07.068 "write_zeroes": true, 00:12:07.068 "zcopy": true, 00:12:07.068 "get_zone_info": false, 00:12:07.068 "zone_management": false, 00:12:07.068 "zone_append": false, 00:12:07.068 "compare": false, 00:12:07.068 "compare_and_write": false, 00:12:07.068 "abort": true, 00:12:07.068 "seek_hole": false, 00:12:07.068 "seek_data": false, 00:12:07.068 "copy": true, 00:12:07.068 "nvme_iov_md": false 00:12:07.068 }, 00:12:07.068 "memory_domains": [ 00:12:07.068 { 00:12:07.068 "dma_device_id": "system", 00:12:07.068 "dma_device_type": 1 00:12:07.068 }, 00:12:07.068 { 00:12:07.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.068 "dma_device_type": 2 00:12:07.068 } 00:12:07.068 ], 00:12:07.068 "driver_specific": {} 00:12:07.068 } 00:12:07.068 ] 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.068 "name": "Existed_Raid", 00:12:07.068 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:07.068 "strip_size_kb": 64, 00:12:07.068 "state": "configuring", 00:12:07.068 "raid_level": "concat", 00:12:07.068 "superblock": true, 00:12:07.068 "num_base_bdevs": 4, 00:12:07.068 "num_base_bdevs_discovered": 2, 00:12:07.068 
"num_base_bdevs_operational": 4, 00:12:07.068 "base_bdevs_list": [ 00:12:07.068 { 00:12:07.068 "name": "BaseBdev1", 00:12:07.068 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:07.068 "is_configured": true, 00:12:07.068 "data_offset": 2048, 00:12:07.068 "data_size": 63488 00:12:07.068 }, 00:12:07.068 { 00:12:07.068 "name": "BaseBdev2", 00:12:07.068 "uuid": "d7a1378a-beb0-4d9a-af61-f6167ad7797e", 00:12:07.068 "is_configured": true, 00:12:07.068 "data_offset": 2048, 00:12:07.068 "data_size": 63488 00:12:07.068 }, 00:12:07.068 { 00:12:07.068 "name": "BaseBdev3", 00:12:07.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.068 "is_configured": false, 00:12:07.068 "data_offset": 0, 00:12:07.068 "data_size": 0 00:12:07.068 }, 00:12:07.068 { 00:12:07.068 "name": "BaseBdev4", 00:12:07.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.068 "is_configured": false, 00:12:07.068 "data_offset": 0, 00:12:07.068 "data_size": 0 00:12:07.068 } 00:12:07.068 ] 00:12:07.068 }' 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.068 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.326 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.326 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.326 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.585 [2024-11-27 19:10:16.995772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.585 BaseBdev3 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.585 19:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.585 [ 00:12:07.585 { 00:12:07.585 "name": "BaseBdev3", 00:12:07.585 "aliases": [ 00:12:07.585 "080c182c-6507-40cf-ab6b-1c7f3fb5877e" 00:12:07.585 ], 00:12:07.585 "product_name": "Malloc disk", 00:12:07.585 "block_size": 512, 00:12:07.585 "num_blocks": 65536, 00:12:07.585 "uuid": "080c182c-6507-40cf-ab6b-1c7f3fb5877e", 00:12:07.585 "assigned_rate_limits": { 00:12:07.585 "rw_ios_per_sec": 0, 00:12:07.585 "rw_mbytes_per_sec": 0, 00:12:07.585 "r_mbytes_per_sec": 0, 00:12:07.585 "w_mbytes_per_sec": 0 00:12:07.585 }, 00:12:07.585 "claimed": true, 00:12:07.585 "claim_type": "exclusive_write", 00:12:07.585 "zoned": false, 00:12:07.585 "supported_io_types": { 
00:12:07.585 "read": true, 00:12:07.585 "write": true, 00:12:07.585 "unmap": true, 00:12:07.585 "flush": true, 00:12:07.585 "reset": true, 00:12:07.585 "nvme_admin": false, 00:12:07.585 "nvme_io": false, 00:12:07.585 "nvme_io_md": false, 00:12:07.585 "write_zeroes": true, 00:12:07.585 "zcopy": true, 00:12:07.585 "get_zone_info": false, 00:12:07.585 "zone_management": false, 00:12:07.585 "zone_append": false, 00:12:07.585 "compare": false, 00:12:07.585 "compare_and_write": false, 00:12:07.585 "abort": true, 00:12:07.585 "seek_hole": false, 00:12:07.585 "seek_data": false, 00:12:07.585 "copy": true, 00:12:07.585 "nvme_iov_md": false 00:12:07.585 }, 00:12:07.585 "memory_domains": [ 00:12:07.585 { 00:12:07.585 "dma_device_id": "system", 00:12:07.585 "dma_device_type": 1 00:12:07.585 }, 00:12:07.585 { 00:12:07.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.585 "dma_device_type": 2 00:12:07.585 } 00:12:07.585 ], 00:12:07.585 "driver_specific": {} 00:12:07.585 } 00:12:07.585 ] 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.585 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.586 "name": "Existed_Raid", 00:12:07.586 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:07.586 "strip_size_kb": 64, 00:12:07.586 "state": "configuring", 00:12:07.586 "raid_level": "concat", 00:12:07.586 "superblock": true, 00:12:07.586 "num_base_bdevs": 4, 00:12:07.586 "num_base_bdevs_discovered": 3, 00:12:07.586 "num_base_bdevs_operational": 4, 00:12:07.586 "base_bdevs_list": [ 00:12:07.586 { 00:12:07.586 "name": "BaseBdev1", 00:12:07.586 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:07.586 "is_configured": true, 00:12:07.586 "data_offset": 2048, 00:12:07.586 "data_size": 63488 00:12:07.586 }, 00:12:07.586 { 00:12:07.586 "name": "BaseBdev2", 00:12:07.586 
"uuid": "d7a1378a-beb0-4d9a-af61-f6167ad7797e", 00:12:07.586 "is_configured": true, 00:12:07.586 "data_offset": 2048, 00:12:07.586 "data_size": 63488 00:12:07.586 }, 00:12:07.586 { 00:12:07.586 "name": "BaseBdev3", 00:12:07.586 "uuid": "080c182c-6507-40cf-ab6b-1c7f3fb5877e", 00:12:07.586 "is_configured": true, 00:12:07.586 "data_offset": 2048, 00:12:07.586 "data_size": 63488 00:12:07.586 }, 00:12:07.586 { 00:12:07.586 "name": "BaseBdev4", 00:12:07.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.586 "is_configured": false, 00:12:07.586 "data_offset": 0, 00:12:07.586 "data_size": 0 00:12:07.586 } 00:12:07.586 ] 00:12:07.586 }' 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.586 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.153 [2024-11-27 19:10:17.544984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:08.153 [2024-11-27 19:10:17.545288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.153 [2024-11-27 19:10:17.545305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:08.153 [2024-11-27 19:10:17.545611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:08.153 [2024-11-27 19:10:17.545815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.153 [2024-11-27 19:10:17.545831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:12:08.153 BaseBdev4 00:12:08.153 [2024-11-27 19:10:17.546014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.153 [ 00:12:08.153 { 00:12:08.153 "name": "BaseBdev4", 00:12:08.153 "aliases": [ 00:12:08.153 "dd0a9dd2-a3b3-465e-8e0a-e5d1d5a471b2" 00:12:08.153 ], 00:12:08.153 "product_name": "Malloc disk", 00:12:08.153 "block_size": 512, 
00:12:08.153 "num_blocks": 65536, 00:12:08.153 "uuid": "dd0a9dd2-a3b3-465e-8e0a-e5d1d5a471b2", 00:12:08.153 "assigned_rate_limits": { 00:12:08.153 "rw_ios_per_sec": 0, 00:12:08.153 "rw_mbytes_per_sec": 0, 00:12:08.153 "r_mbytes_per_sec": 0, 00:12:08.153 "w_mbytes_per_sec": 0 00:12:08.153 }, 00:12:08.153 "claimed": true, 00:12:08.153 "claim_type": "exclusive_write", 00:12:08.153 "zoned": false, 00:12:08.153 "supported_io_types": { 00:12:08.153 "read": true, 00:12:08.153 "write": true, 00:12:08.153 "unmap": true, 00:12:08.153 "flush": true, 00:12:08.153 "reset": true, 00:12:08.153 "nvme_admin": false, 00:12:08.153 "nvme_io": false, 00:12:08.153 "nvme_io_md": false, 00:12:08.153 "write_zeroes": true, 00:12:08.153 "zcopy": true, 00:12:08.153 "get_zone_info": false, 00:12:08.153 "zone_management": false, 00:12:08.153 "zone_append": false, 00:12:08.153 "compare": false, 00:12:08.153 "compare_and_write": false, 00:12:08.153 "abort": true, 00:12:08.153 "seek_hole": false, 00:12:08.153 "seek_data": false, 00:12:08.153 "copy": true, 00:12:08.153 "nvme_iov_md": false 00:12:08.153 }, 00:12:08.153 "memory_domains": [ 00:12:08.153 { 00:12:08.153 "dma_device_id": "system", 00:12:08.153 "dma_device_type": 1 00:12:08.153 }, 00:12:08.153 { 00:12:08.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.153 "dma_device_type": 2 00:12:08.153 } 00:12:08.153 ], 00:12:08.153 "driver_specific": {} 00:12:08.153 } 00:12:08.153 ] 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.153 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.153 "name": "Existed_Raid", 00:12:08.153 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:08.153 "strip_size_kb": 64, 00:12:08.153 "state": "online", 00:12:08.153 "raid_level": "concat", 00:12:08.153 "superblock": true, 00:12:08.153 "num_base_bdevs": 
4, 00:12:08.153 "num_base_bdevs_discovered": 4, 00:12:08.153 "num_base_bdevs_operational": 4, 00:12:08.153 "base_bdevs_list": [ 00:12:08.153 { 00:12:08.153 "name": "BaseBdev1", 00:12:08.153 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:08.153 "is_configured": true, 00:12:08.153 "data_offset": 2048, 00:12:08.153 "data_size": 63488 00:12:08.153 }, 00:12:08.153 { 00:12:08.153 "name": "BaseBdev2", 00:12:08.153 "uuid": "d7a1378a-beb0-4d9a-af61-f6167ad7797e", 00:12:08.153 "is_configured": true, 00:12:08.153 "data_offset": 2048, 00:12:08.154 "data_size": 63488 00:12:08.154 }, 00:12:08.154 { 00:12:08.154 "name": "BaseBdev3", 00:12:08.154 "uuid": "080c182c-6507-40cf-ab6b-1c7f3fb5877e", 00:12:08.154 "is_configured": true, 00:12:08.154 "data_offset": 2048, 00:12:08.154 "data_size": 63488 00:12:08.154 }, 00:12:08.154 { 00:12:08.154 "name": "BaseBdev4", 00:12:08.154 "uuid": "dd0a9dd2-a3b3-465e-8e0a-e5d1d5a471b2", 00:12:08.154 "is_configured": true, 00:12:08.154 "data_offset": 2048, 00:12:08.154 "data_size": 63488 00:12:08.154 } 00:12:08.154 ] 00:12:08.154 }' 00:12:08.154 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.154 19:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.412 19:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.412 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:08.412 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.412 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.412 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.413 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.413 
19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:08.413 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.413 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.413 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.413 [2024-11-27 19:10:18.016650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.413 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.673 "name": "Existed_Raid", 00:12:08.673 "aliases": [ 00:12:08.673 "a6a0f4f2-8cee-42a9-aaac-cf11519f4619" 00:12:08.673 ], 00:12:08.673 "product_name": "Raid Volume", 00:12:08.673 "block_size": 512, 00:12:08.673 "num_blocks": 253952, 00:12:08.673 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:08.673 "assigned_rate_limits": { 00:12:08.673 "rw_ios_per_sec": 0, 00:12:08.673 "rw_mbytes_per_sec": 0, 00:12:08.673 "r_mbytes_per_sec": 0, 00:12:08.673 "w_mbytes_per_sec": 0 00:12:08.673 }, 00:12:08.673 "claimed": false, 00:12:08.673 "zoned": false, 00:12:08.673 "supported_io_types": { 00:12:08.673 "read": true, 00:12:08.673 "write": true, 00:12:08.673 "unmap": true, 00:12:08.673 "flush": true, 00:12:08.673 "reset": true, 00:12:08.673 "nvme_admin": false, 00:12:08.673 "nvme_io": false, 00:12:08.673 "nvme_io_md": false, 00:12:08.673 "write_zeroes": true, 00:12:08.673 "zcopy": false, 00:12:08.673 "get_zone_info": false, 00:12:08.673 "zone_management": false, 00:12:08.673 "zone_append": false, 00:12:08.673 "compare": false, 00:12:08.673 "compare_and_write": false, 00:12:08.673 "abort": false, 00:12:08.673 "seek_hole": false, 00:12:08.673 "seek_data": false, 00:12:08.673 "copy": false, 00:12:08.673 
"nvme_iov_md": false 00:12:08.673 }, 00:12:08.673 "memory_domains": [ 00:12:08.673 { 00:12:08.673 "dma_device_id": "system", 00:12:08.673 "dma_device_type": 1 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.673 "dma_device_type": 2 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "system", 00:12:08.673 "dma_device_type": 1 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.673 "dma_device_type": 2 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "system", 00:12:08.673 "dma_device_type": 1 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.673 "dma_device_type": 2 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "system", 00:12:08.673 "dma_device_type": 1 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.673 "dma_device_type": 2 00:12:08.673 } 00:12:08.673 ], 00:12:08.673 "driver_specific": { 00:12:08.673 "raid": { 00:12:08.673 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:08.673 "strip_size_kb": 64, 00:12:08.673 "state": "online", 00:12:08.673 "raid_level": "concat", 00:12:08.673 "superblock": true, 00:12:08.673 "num_base_bdevs": 4, 00:12:08.673 "num_base_bdevs_discovered": 4, 00:12:08.673 "num_base_bdevs_operational": 4, 00:12:08.673 "base_bdevs_list": [ 00:12:08.673 { 00:12:08.673 "name": "BaseBdev1", 00:12:08.673 "uuid": "063bc9f9-6437-4e2e-b3d4-9927ad88b784", 00:12:08.673 "is_configured": true, 00:12:08.673 "data_offset": 2048, 00:12:08.673 "data_size": 63488 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "name": "BaseBdev2", 00:12:08.673 "uuid": "d7a1378a-beb0-4d9a-af61-f6167ad7797e", 00:12:08.673 "is_configured": true, 00:12:08.673 "data_offset": 2048, 00:12:08.673 "data_size": 63488 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "name": "BaseBdev3", 00:12:08.673 "uuid": "080c182c-6507-40cf-ab6b-1c7f3fb5877e", 00:12:08.673 "is_configured": true, 
00:12:08.673 "data_offset": 2048, 00:12:08.673 "data_size": 63488 00:12:08.673 }, 00:12:08.673 { 00:12:08.673 "name": "BaseBdev4", 00:12:08.673 "uuid": "dd0a9dd2-a3b3-465e-8e0a-e5d1d5a471b2", 00:12:08.673 "is_configured": true, 00:12:08.673 "data_offset": 2048, 00:12:08.673 "data_size": 63488 00:12:08.673 } 00:12:08.673 ] 00:12:08.673 } 00:12:08.673 } 00:12:08.673 }' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:08.673 BaseBdev2 00:12:08.673 BaseBdev3 00:12:08.673 BaseBdev4' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.673 19:10:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.673 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.674 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.933 [2024-11-27 19:10:18.347822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.933 [2024-11-27 19:10:18.347859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.933 [2024-11-27 19:10:18.347921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.933 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.934 "name": "Existed_Raid", 00:12:08.934 "uuid": "a6a0f4f2-8cee-42a9-aaac-cf11519f4619", 00:12:08.934 "strip_size_kb": 64, 00:12:08.934 "state": "offline", 00:12:08.934 "raid_level": "concat", 00:12:08.934 "superblock": true, 00:12:08.934 "num_base_bdevs": 4, 00:12:08.934 "num_base_bdevs_discovered": 3, 00:12:08.934 "num_base_bdevs_operational": 3, 00:12:08.934 "base_bdevs_list": [ 00:12:08.934 { 00:12:08.934 "name": null, 00:12:08.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.934 "is_configured": false, 00:12:08.934 "data_offset": 0, 00:12:08.934 "data_size": 63488 00:12:08.934 }, 00:12:08.934 { 00:12:08.934 "name": "BaseBdev2", 00:12:08.934 "uuid": "d7a1378a-beb0-4d9a-af61-f6167ad7797e", 00:12:08.934 "is_configured": true, 00:12:08.934 "data_offset": 2048, 00:12:08.934 "data_size": 63488 00:12:08.934 }, 00:12:08.934 { 00:12:08.934 "name": "BaseBdev3", 00:12:08.934 "uuid": "080c182c-6507-40cf-ab6b-1c7f3fb5877e", 00:12:08.934 "is_configured": true, 00:12:08.934 "data_offset": 2048, 00:12:08.934 "data_size": 63488 00:12:08.934 }, 00:12:08.934 { 00:12:08.934 "name": "BaseBdev4", 00:12:08.934 "uuid": "dd0a9dd2-a3b3-465e-8e0a-e5d1d5a471b2", 00:12:08.934 "is_configured": true, 00:12:08.934 "data_offset": 2048, 00:12:08.934 "data_size": 63488 00:12:08.934 } 00:12:08.934 ] 00:12:08.934 }' 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.934 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.503 
19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.503 19:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.503 [2024-11-27 19:10:18.916851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:09.503 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.504 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.504 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:09.504 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.504 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.504 [2024-11-27 19:10:19.082162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:09.762 19:10:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 [2024-11-27 19:10:19.248514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:09.762 [2024-11-27 19:10:19.248572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 BaseBdev2 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 [ 00:12:10.022 { 00:12:10.022 "name": "BaseBdev2", 00:12:10.022 "aliases": [ 00:12:10.022 
"99f683b3-e0d6-49f0-9365-288b9407b5e6" 00:12:10.022 ], 00:12:10.022 "product_name": "Malloc disk", 00:12:10.022 "block_size": 512, 00:12:10.022 "num_blocks": 65536, 00:12:10.022 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:10.022 "assigned_rate_limits": { 00:12:10.022 "rw_ios_per_sec": 0, 00:12:10.022 "rw_mbytes_per_sec": 0, 00:12:10.022 "r_mbytes_per_sec": 0, 00:12:10.022 "w_mbytes_per_sec": 0 00:12:10.022 }, 00:12:10.022 "claimed": false, 00:12:10.022 "zoned": false, 00:12:10.022 "supported_io_types": { 00:12:10.022 "read": true, 00:12:10.022 "write": true, 00:12:10.022 "unmap": true, 00:12:10.022 "flush": true, 00:12:10.022 "reset": true, 00:12:10.022 "nvme_admin": false, 00:12:10.022 "nvme_io": false, 00:12:10.022 "nvme_io_md": false, 00:12:10.022 "write_zeroes": true, 00:12:10.022 "zcopy": true, 00:12:10.022 "get_zone_info": false, 00:12:10.022 "zone_management": false, 00:12:10.022 "zone_append": false, 00:12:10.022 "compare": false, 00:12:10.022 "compare_and_write": false, 00:12:10.022 "abort": true, 00:12:10.022 "seek_hole": false, 00:12:10.022 "seek_data": false, 00:12:10.022 "copy": true, 00:12:10.022 "nvme_iov_md": false 00:12:10.022 }, 00:12:10.022 "memory_domains": [ 00:12:10.022 { 00:12:10.022 "dma_device_id": "system", 00:12:10.022 "dma_device_type": 1 00:12:10.022 }, 00:12:10.022 { 00:12:10.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.022 "dma_device_type": 2 00:12:10.022 } 00:12:10.022 ], 00:12:10.022 "driver_specific": {} 00:12:10.022 } 00:12:10.022 ] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.022 19:10:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 BaseBdev3 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.022 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.022 [ 00:12:10.022 { 
00:12:10.022 "name": "BaseBdev3", 00:12:10.022 "aliases": [ 00:12:10.022 "89ff4821-3acb-47c4-a1dc-53659945844c" 00:12:10.022 ], 00:12:10.022 "product_name": "Malloc disk", 00:12:10.022 "block_size": 512, 00:12:10.022 "num_blocks": 65536, 00:12:10.022 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:10.022 "assigned_rate_limits": { 00:12:10.022 "rw_ios_per_sec": 0, 00:12:10.022 "rw_mbytes_per_sec": 0, 00:12:10.022 "r_mbytes_per_sec": 0, 00:12:10.022 "w_mbytes_per_sec": 0 00:12:10.022 }, 00:12:10.022 "claimed": false, 00:12:10.022 "zoned": false, 00:12:10.022 "supported_io_types": { 00:12:10.022 "read": true, 00:12:10.022 "write": true, 00:12:10.022 "unmap": true, 00:12:10.022 "flush": true, 00:12:10.022 "reset": true, 00:12:10.022 "nvme_admin": false, 00:12:10.022 "nvme_io": false, 00:12:10.022 "nvme_io_md": false, 00:12:10.022 "write_zeroes": true, 00:12:10.022 "zcopy": true, 00:12:10.022 "get_zone_info": false, 00:12:10.022 "zone_management": false, 00:12:10.022 "zone_append": false, 00:12:10.022 "compare": false, 00:12:10.022 "compare_and_write": false, 00:12:10.023 "abort": true, 00:12:10.023 "seek_hole": false, 00:12:10.023 "seek_data": false, 00:12:10.023 "copy": true, 00:12:10.023 "nvme_iov_md": false 00:12:10.023 }, 00:12:10.023 "memory_domains": [ 00:12:10.023 { 00:12:10.023 "dma_device_id": "system", 00:12:10.023 "dma_device_type": 1 00:12:10.023 }, 00:12:10.023 { 00:12:10.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.023 "dma_device_type": 2 00:12:10.023 } 00:12:10.023 ], 00:12:10.023 "driver_specific": {} 00:12:10.023 } 00:12:10.023 ] 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.023 BaseBdev4 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.023 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:10.023 [ 00:12:10.023 { 00:12:10.023 "name": "BaseBdev4", 00:12:10.023 "aliases": [ 00:12:10.023 "12301ffb-2b45-4919-934d-94e635dac5b8" 00:12:10.023 ], 00:12:10.023 "product_name": "Malloc disk", 00:12:10.023 "block_size": 512, 00:12:10.023 "num_blocks": 65536, 00:12:10.023 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:10.023 "assigned_rate_limits": { 00:12:10.023 "rw_ios_per_sec": 0, 00:12:10.023 "rw_mbytes_per_sec": 0, 00:12:10.023 "r_mbytes_per_sec": 0, 00:12:10.023 "w_mbytes_per_sec": 0 00:12:10.023 }, 00:12:10.023 "claimed": false, 00:12:10.023 "zoned": false, 00:12:10.023 "supported_io_types": { 00:12:10.023 "read": true, 00:12:10.023 "write": true, 00:12:10.023 "unmap": true, 00:12:10.023 "flush": true, 00:12:10.023 "reset": true, 00:12:10.023 "nvme_admin": false, 00:12:10.023 "nvme_io": false, 00:12:10.023 "nvme_io_md": false, 00:12:10.023 "write_zeroes": true, 00:12:10.023 "zcopy": true, 00:12:10.023 "get_zone_info": false, 00:12:10.023 "zone_management": false, 00:12:10.023 "zone_append": false, 00:12:10.023 "compare": false, 00:12:10.023 "compare_and_write": false, 00:12:10.023 "abort": true, 00:12:10.023 "seek_hole": false, 00:12:10.023 "seek_data": false, 00:12:10.023 "copy": true, 00:12:10.023 "nvme_iov_md": false 00:12:10.283 }, 00:12:10.283 "memory_domains": [ 00:12:10.283 { 00:12:10.283 "dma_device_id": "system", 00:12:10.283 "dma_device_type": 1 00:12:10.283 }, 00:12:10.283 { 00:12:10.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.283 "dma_device_type": 2 00:12:10.283 } 00:12:10.283 ], 00:12:10.283 "driver_specific": {} 00:12:10.283 } 00:12:10.283 ] 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:10.283 19:10:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.283 [2024-11-27 19:10:19.666273] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.283 [2024-11-27 19:10:19.666366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.283 [2024-11-27 19:10:19.666395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.283 [2024-11-27 19:10:19.668624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.283 [2024-11-27 19:10:19.668681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.283 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.284 "name": "Existed_Raid", 00:12:10.284 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:10.284 "strip_size_kb": 64, 00:12:10.284 "state": "configuring", 00:12:10.284 "raid_level": "concat", 00:12:10.284 "superblock": true, 00:12:10.284 "num_base_bdevs": 4, 00:12:10.284 "num_base_bdevs_discovered": 3, 00:12:10.284 "num_base_bdevs_operational": 4, 00:12:10.284 "base_bdevs_list": [ 00:12:10.284 { 00:12:10.284 "name": "BaseBdev1", 00:12:10.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.284 "is_configured": false, 00:12:10.284 "data_offset": 0, 00:12:10.284 "data_size": 0 00:12:10.284 }, 00:12:10.284 { 00:12:10.284 "name": "BaseBdev2", 00:12:10.284 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:10.284 "is_configured": true, 00:12:10.284 "data_offset": 2048, 00:12:10.284 "data_size": 63488 
00:12:10.284 }, 00:12:10.284 { 00:12:10.284 "name": "BaseBdev3", 00:12:10.284 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:10.284 "is_configured": true, 00:12:10.284 "data_offset": 2048, 00:12:10.284 "data_size": 63488 00:12:10.284 }, 00:12:10.284 { 00:12:10.284 "name": "BaseBdev4", 00:12:10.284 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:10.284 "is_configured": true, 00:12:10.284 "data_offset": 2048, 00:12:10.284 "data_size": 63488 00:12:10.284 } 00:12:10.284 ] 00:12:10.284 }' 00:12:10.284 19:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.284 19:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.543 [2024-11-27 19:10:20.121518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.543 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.543 "name": "Existed_Raid", 00:12:10.543 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:10.543 "strip_size_kb": 64, 00:12:10.543 "state": "configuring", 00:12:10.543 "raid_level": "concat", 00:12:10.543 "superblock": true, 00:12:10.543 "num_base_bdevs": 4, 00:12:10.543 "num_base_bdevs_discovered": 2, 00:12:10.543 "num_base_bdevs_operational": 4, 00:12:10.543 "base_bdevs_list": [ 00:12:10.543 { 00:12:10.543 "name": "BaseBdev1", 00:12:10.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.543 "is_configured": false, 00:12:10.543 "data_offset": 0, 00:12:10.543 "data_size": 0 00:12:10.543 }, 00:12:10.543 { 00:12:10.543 "name": null, 00:12:10.544 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:10.544 "is_configured": false, 00:12:10.544 "data_offset": 0, 00:12:10.544 "data_size": 63488 
00:12:10.544 }, 00:12:10.544 { 00:12:10.544 "name": "BaseBdev3", 00:12:10.544 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:10.544 "is_configured": true, 00:12:10.544 "data_offset": 2048, 00:12:10.544 "data_size": 63488 00:12:10.544 }, 00:12:10.544 { 00:12:10.544 "name": "BaseBdev4", 00:12:10.544 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:10.544 "is_configured": true, 00:12:10.544 "data_offset": 2048, 00:12:10.544 "data_size": 63488 00:12:10.544 } 00:12:10.544 ] 00:12:10.544 }' 00:12:10.544 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.544 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.112 [2024-11-27 19:10:20.662778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.112 BaseBdev1 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.112 [ 00:12:11.112 { 00:12:11.112 "name": "BaseBdev1", 00:12:11.112 "aliases": [ 00:12:11.112 "62eff4f8-1386-4572-8e58-47f207f72f2c" 00:12:11.112 ], 00:12:11.112 "product_name": "Malloc disk", 00:12:11.112 "block_size": 512, 00:12:11.112 "num_blocks": 65536, 00:12:11.112 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:11.112 "assigned_rate_limits": { 00:12:11.112 "rw_ios_per_sec": 0, 00:12:11.112 "rw_mbytes_per_sec": 0, 
00:12:11.112 "r_mbytes_per_sec": 0, 00:12:11.112 "w_mbytes_per_sec": 0 00:12:11.112 }, 00:12:11.112 "claimed": true, 00:12:11.112 "claim_type": "exclusive_write", 00:12:11.112 "zoned": false, 00:12:11.112 "supported_io_types": { 00:12:11.112 "read": true, 00:12:11.112 "write": true, 00:12:11.112 "unmap": true, 00:12:11.112 "flush": true, 00:12:11.112 "reset": true, 00:12:11.112 "nvme_admin": false, 00:12:11.112 "nvme_io": false, 00:12:11.112 "nvme_io_md": false, 00:12:11.112 "write_zeroes": true, 00:12:11.112 "zcopy": true, 00:12:11.112 "get_zone_info": false, 00:12:11.112 "zone_management": false, 00:12:11.112 "zone_append": false, 00:12:11.112 "compare": false, 00:12:11.112 "compare_and_write": false, 00:12:11.112 "abort": true, 00:12:11.112 "seek_hole": false, 00:12:11.112 "seek_data": false, 00:12:11.112 "copy": true, 00:12:11.112 "nvme_iov_md": false 00:12:11.112 }, 00:12:11.112 "memory_domains": [ 00:12:11.112 { 00:12:11.112 "dma_device_id": "system", 00:12:11.112 "dma_device_type": 1 00:12:11.112 }, 00:12:11.112 { 00:12:11.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.112 "dma_device_type": 2 00:12:11.112 } 00:12:11.112 ], 00:12:11.112 "driver_specific": {} 00:12:11.112 } 00:12:11.112 ] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.112 19:10:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.112 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.372 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.372 "name": "Existed_Raid", 00:12:11.372 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:11.372 "strip_size_kb": 64, 00:12:11.372 "state": "configuring", 00:12:11.372 "raid_level": "concat", 00:12:11.372 "superblock": true, 00:12:11.372 "num_base_bdevs": 4, 00:12:11.372 "num_base_bdevs_discovered": 3, 00:12:11.372 "num_base_bdevs_operational": 4, 00:12:11.372 "base_bdevs_list": [ 00:12:11.372 { 00:12:11.372 "name": "BaseBdev1", 00:12:11.372 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:11.372 "is_configured": true, 00:12:11.372 "data_offset": 2048, 00:12:11.372 "data_size": 63488 00:12:11.372 }, 00:12:11.372 { 
00:12:11.372 "name": null, 00:12:11.372 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:11.372 "is_configured": false, 00:12:11.372 "data_offset": 0, 00:12:11.372 "data_size": 63488 00:12:11.372 }, 00:12:11.372 { 00:12:11.372 "name": "BaseBdev3", 00:12:11.372 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:11.372 "is_configured": true, 00:12:11.372 "data_offset": 2048, 00:12:11.372 "data_size": 63488 00:12:11.372 }, 00:12:11.372 { 00:12:11.372 "name": "BaseBdev4", 00:12:11.372 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:11.372 "is_configured": true, 00:12:11.372 "data_offset": 2048, 00:12:11.372 "data_size": 63488 00:12:11.372 } 00:12:11.372 ] 00:12:11.372 }' 00:12:11.372 19:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.372 19:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.631 [2024-11-27 19:10:21.213900] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.631 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.889 19:10:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.890 "name": "Existed_Raid", 00:12:11.890 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:11.890 "strip_size_kb": 64, 00:12:11.890 "state": "configuring", 00:12:11.890 "raid_level": "concat", 00:12:11.890 "superblock": true, 00:12:11.890 "num_base_bdevs": 4, 00:12:11.890 "num_base_bdevs_discovered": 2, 00:12:11.890 "num_base_bdevs_operational": 4, 00:12:11.890 "base_bdevs_list": [ 00:12:11.890 { 00:12:11.890 "name": "BaseBdev1", 00:12:11.890 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:11.890 "is_configured": true, 00:12:11.890 "data_offset": 2048, 00:12:11.890 "data_size": 63488 00:12:11.890 }, 00:12:11.890 { 00:12:11.890 "name": null, 00:12:11.890 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:11.890 "is_configured": false, 00:12:11.890 "data_offset": 0, 00:12:11.890 "data_size": 63488 00:12:11.890 }, 00:12:11.890 { 00:12:11.890 "name": null, 00:12:11.890 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:11.890 "is_configured": false, 00:12:11.890 "data_offset": 0, 00:12:11.890 "data_size": 63488 00:12:11.890 }, 00:12:11.890 { 00:12:11.890 "name": "BaseBdev4", 00:12:11.890 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:11.890 "is_configured": true, 00:12:11.890 "data_offset": 2048, 00:12:11.890 "data_size": 63488 00:12:11.890 } 00:12:11.890 ] 00:12:11.890 }' 00:12:11.890 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.890 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.149 19:10:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.149 [2024-11-27 19:10:21.713028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.149 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.150 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.150 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.150 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.150 "name": "Existed_Raid", 00:12:12.150 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:12.150 "strip_size_kb": 64, 00:12:12.150 "state": "configuring", 00:12:12.150 "raid_level": "concat", 00:12:12.150 "superblock": true, 00:12:12.150 "num_base_bdevs": 4, 00:12:12.150 "num_base_bdevs_discovered": 3, 00:12:12.150 "num_base_bdevs_operational": 4, 00:12:12.150 "base_bdevs_list": [ 00:12:12.150 { 00:12:12.150 "name": "BaseBdev1", 00:12:12.150 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:12.150 "is_configured": true, 00:12:12.150 "data_offset": 2048, 00:12:12.150 "data_size": 63488 00:12:12.150 }, 00:12:12.150 { 00:12:12.150 "name": null, 00:12:12.150 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:12.150 "is_configured": false, 00:12:12.150 "data_offset": 0, 00:12:12.150 "data_size": 63488 00:12:12.150 }, 00:12:12.150 { 00:12:12.150 "name": "BaseBdev3", 00:12:12.150 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:12.150 "is_configured": true, 00:12:12.150 "data_offset": 2048, 00:12:12.150 "data_size": 63488 00:12:12.150 }, 00:12:12.150 { 00:12:12.150 "name": "BaseBdev4", 00:12:12.150 "uuid": 
"12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:12.150 "is_configured": true, 00:12:12.150 "data_offset": 2048, 00:12:12.150 "data_size": 63488 00:12:12.150 } 00:12:12.150 ] 00:12:12.150 }' 00:12:12.150 19:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.150 19:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.721 [2024-11-27 19:10:22.176281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.721 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.722 "name": "Existed_Raid", 00:12:12.722 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:12.722 "strip_size_kb": 64, 00:12:12.722 "state": "configuring", 00:12:12.722 "raid_level": "concat", 00:12:12.722 "superblock": true, 00:12:12.722 "num_base_bdevs": 4, 00:12:12.722 "num_base_bdevs_discovered": 2, 00:12:12.722 "num_base_bdevs_operational": 4, 00:12:12.722 "base_bdevs_list": [ 00:12:12.722 { 00:12:12.722 "name": null, 00:12:12.722 
"uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:12.722 "is_configured": false, 00:12:12.722 "data_offset": 0, 00:12:12.722 "data_size": 63488 00:12:12.722 }, 00:12:12.722 { 00:12:12.722 "name": null, 00:12:12.722 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:12.722 "is_configured": false, 00:12:12.722 "data_offset": 0, 00:12:12.722 "data_size": 63488 00:12:12.722 }, 00:12:12.722 { 00:12:12.722 "name": "BaseBdev3", 00:12:12.722 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:12.722 "is_configured": true, 00:12:12.722 "data_offset": 2048, 00:12:12.722 "data_size": 63488 00:12:12.722 }, 00:12:12.722 { 00:12:12.722 "name": "BaseBdev4", 00:12:12.722 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:12.722 "is_configured": true, 00:12:12.722 "data_offset": 2048, 00:12:12.722 "data_size": 63488 00:12:12.722 } 00:12:12.722 ] 00:12:12.722 }' 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.722 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 [2024-11-27 19:10:22.770709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.298 19:10:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.298 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.298 "name": "Existed_Raid", 00:12:13.298 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:13.298 "strip_size_kb": 64, 00:12:13.298 "state": "configuring", 00:12:13.298 "raid_level": "concat", 00:12:13.298 "superblock": true, 00:12:13.298 "num_base_bdevs": 4, 00:12:13.298 "num_base_bdevs_discovered": 3, 00:12:13.298 "num_base_bdevs_operational": 4, 00:12:13.298 "base_bdevs_list": [ 00:12:13.298 { 00:12:13.298 "name": null, 00:12:13.298 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:13.298 "is_configured": false, 00:12:13.298 "data_offset": 0, 00:12:13.298 "data_size": 63488 00:12:13.298 }, 00:12:13.298 { 00:12:13.298 "name": "BaseBdev2", 00:12:13.298 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:13.298 "is_configured": true, 00:12:13.298 "data_offset": 2048, 00:12:13.298 "data_size": 63488 00:12:13.298 }, 00:12:13.298 { 00:12:13.298 "name": "BaseBdev3", 00:12:13.298 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:13.298 "is_configured": true, 00:12:13.298 "data_offset": 2048, 00:12:13.298 "data_size": 63488 00:12:13.298 }, 00:12:13.298 { 00:12:13.299 "name": "BaseBdev4", 00:12:13.299 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:13.299 "is_configured": true, 00:12:13.299 "data_offset": 2048, 00:12:13.299 "data_size": 63488 00:12:13.299 } 00:12:13.299 ] 00:12:13.299 }' 00:12:13.299 19:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.299 19:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.883 19:10:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62eff4f8-1386-4572-8e58-47f207f72f2c 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 [2024-11-27 19:10:23.364403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:13.883 [2024-11-27 19:10:23.364671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:13.883 [2024-11-27 19:10:23.364685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:13.883 [2024-11-27 19:10:23.365099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:13.883 [2024-11-27 19:10:23.365306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:13.883 [2024-11-27 19:10:23.365348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:13.883 NewBaseBdev 00:12:13.883 [2024-11-27 19:10:23.365587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:13.883 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.883 19:10:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 [ 00:12:13.883 { 00:12:13.883 "name": "NewBaseBdev", 00:12:13.883 "aliases": [ 00:12:13.883 "62eff4f8-1386-4572-8e58-47f207f72f2c" 00:12:13.883 ], 00:12:13.883 "product_name": "Malloc disk", 00:12:13.883 "block_size": 512, 00:12:13.883 "num_blocks": 65536, 00:12:13.883 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:13.883 "assigned_rate_limits": { 00:12:13.883 "rw_ios_per_sec": 0, 00:12:13.883 "rw_mbytes_per_sec": 0, 00:12:13.883 "r_mbytes_per_sec": 0, 00:12:13.883 "w_mbytes_per_sec": 0 00:12:13.883 }, 00:12:13.883 "claimed": true, 00:12:13.883 "claim_type": "exclusive_write", 00:12:13.883 "zoned": false, 00:12:13.883 "supported_io_types": { 00:12:13.883 "read": true, 00:12:13.883 "write": true, 00:12:13.883 "unmap": true, 00:12:13.883 "flush": true, 00:12:13.883 "reset": true, 00:12:13.883 "nvme_admin": false, 00:12:13.883 "nvme_io": false, 00:12:13.883 "nvme_io_md": false, 00:12:13.883 "write_zeroes": true, 00:12:13.883 "zcopy": true, 00:12:13.883 "get_zone_info": false, 00:12:13.883 "zone_management": false, 00:12:13.883 "zone_append": false, 00:12:13.883 "compare": false, 00:12:13.883 "compare_and_write": false, 00:12:13.883 "abort": true, 00:12:13.884 "seek_hole": false, 00:12:13.884 "seek_data": false, 00:12:13.884 "copy": true, 00:12:13.884 "nvme_iov_md": false 00:12:13.884 }, 00:12:13.884 "memory_domains": [ 00:12:13.884 { 00:12:13.884 "dma_device_id": "system", 00:12:13.884 "dma_device_type": 1 00:12:13.884 }, 00:12:13.884 { 00:12:13.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.884 "dma_device_type": 2 00:12:13.884 } 00:12:13.884 ], 00:12:13.884 "driver_specific": {} 00:12:13.884 } 00:12:13.884 ] 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.884 19:10:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.884 "name": "Existed_Raid", 00:12:13.884 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:13.884 "strip_size_kb": 64, 00:12:13.884 
"state": "online", 00:12:13.884 "raid_level": "concat", 00:12:13.884 "superblock": true, 00:12:13.884 "num_base_bdevs": 4, 00:12:13.884 "num_base_bdevs_discovered": 4, 00:12:13.884 "num_base_bdevs_operational": 4, 00:12:13.884 "base_bdevs_list": [ 00:12:13.884 { 00:12:13.884 "name": "NewBaseBdev", 00:12:13.884 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:13.884 "is_configured": true, 00:12:13.884 "data_offset": 2048, 00:12:13.884 "data_size": 63488 00:12:13.884 }, 00:12:13.884 { 00:12:13.884 "name": "BaseBdev2", 00:12:13.884 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:13.884 "is_configured": true, 00:12:13.884 "data_offset": 2048, 00:12:13.884 "data_size": 63488 00:12:13.884 }, 00:12:13.884 { 00:12:13.884 "name": "BaseBdev3", 00:12:13.884 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:13.884 "is_configured": true, 00:12:13.884 "data_offset": 2048, 00:12:13.884 "data_size": 63488 00:12:13.884 }, 00:12:13.884 { 00:12:13.884 "name": "BaseBdev4", 00:12:13.884 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:13.884 "is_configured": true, 00:12:13.884 "data_offset": 2048, 00:12:13.884 "data_size": 63488 00:12:13.884 } 00:12:13.884 ] 00:12:13.884 }' 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.884 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.452 
19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.452 [2024-11-27 19:10:23.880055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.452 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.452 "name": "Existed_Raid", 00:12:14.452 "aliases": [ 00:12:14.452 "adb58955-db69-4530-8636-a9ee208ed092" 00:12:14.452 ], 00:12:14.452 "product_name": "Raid Volume", 00:12:14.452 "block_size": 512, 00:12:14.452 "num_blocks": 253952, 00:12:14.452 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:14.452 "assigned_rate_limits": { 00:12:14.452 "rw_ios_per_sec": 0, 00:12:14.452 "rw_mbytes_per_sec": 0, 00:12:14.452 "r_mbytes_per_sec": 0, 00:12:14.452 "w_mbytes_per_sec": 0 00:12:14.452 }, 00:12:14.452 "claimed": false, 00:12:14.452 "zoned": false, 00:12:14.452 "supported_io_types": { 00:12:14.452 "read": true, 00:12:14.452 "write": true, 00:12:14.452 "unmap": true, 00:12:14.452 "flush": true, 00:12:14.452 "reset": true, 00:12:14.452 "nvme_admin": false, 00:12:14.452 "nvme_io": false, 00:12:14.452 "nvme_io_md": false, 00:12:14.452 "write_zeroes": true, 00:12:14.452 "zcopy": false, 00:12:14.452 "get_zone_info": false, 00:12:14.452 "zone_management": false, 00:12:14.452 "zone_append": false, 00:12:14.452 "compare": false, 00:12:14.452 "compare_and_write": false, 00:12:14.452 "abort": 
false, 00:12:14.452 "seek_hole": false, 00:12:14.452 "seek_data": false, 00:12:14.452 "copy": false, 00:12:14.453 "nvme_iov_md": false 00:12:14.453 }, 00:12:14.453 "memory_domains": [ 00:12:14.453 { 00:12:14.453 "dma_device_id": "system", 00:12:14.453 "dma_device_type": 1 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.453 "dma_device_type": 2 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "system", 00:12:14.453 "dma_device_type": 1 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.453 "dma_device_type": 2 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "system", 00:12:14.453 "dma_device_type": 1 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.453 "dma_device_type": 2 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "system", 00:12:14.453 "dma_device_type": 1 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.453 "dma_device_type": 2 00:12:14.453 } 00:12:14.453 ], 00:12:14.453 "driver_specific": { 00:12:14.453 "raid": { 00:12:14.453 "uuid": "adb58955-db69-4530-8636-a9ee208ed092", 00:12:14.453 "strip_size_kb": 64, 00:12:14.453 "state": "online", 00:12:14.453 "raid_level": "concat", 00:12:14.453 "superblock": true, 00:12:14.453 "num_base_bdevs": 4, 00:12:14.453 "num_base_bdevs_discovered": 4, 00:12:14.453 "num_base_bdevs_operational": 4, 00:12:14.453 "base_bdevs_list": [ 00:12:14.453 { 00:12:14.453 "name": "NewBaseBdev", 00:12:14.453 "uuid": "62eff4f8-1386-4572-8e58-47f207f72f2c", 00:12:14.453 "is_configured": true, 00:12:14.453 "data_offset": 2048, 00:12:14.453 "data_size": 63488 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "name": "BaseBdev2", 00:12:14.453 "uuid": "99f683b3-e0d6-49f0-9365-288b9407b5e6", 00:12:14.453 "is_configured": true, 00:12:14.453 "data_offset": 2048, 00:12:14.453 "data_size": 63488 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 
"name": "BaseBdev3", 00:12:14.453 "uuid": "89ff4821-3acb-47c4-a1dc-53659945844c", 00:12:14.453 "is_configured": true, 00:12:14.453 "data_offset": 2048, 00:12:14.453 "data_size": 63488 00:12:14.453 }, 00:12:14.453 { 00:12:14.453 "name": "BaseBdev4", 00:12:14.453 "uuid": "12301ffb-2b45-4919-934d-94e635dac5b8", 00:12:14.453 "is_configured": true, 00:12:14.453 "data_offset": 2048, 00:12:14.453 "data_size": 63488 00:12:14.453 } 00:12:14.453 ] 00:12:14.453 } 00:12:14.453 } 00:12:14.453 }' 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:14.453 BaseBdev2 00:12:14.453 BaseBdev3 00:12:14.453 BaseBdev4' 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.453 19:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.453 19:10:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.453 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.713 [2024-11-27 19:10:24.151180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.713 [2024-11-27 19:10:24.151342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.713 [2024-11-27 19:10:24.151479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.713 [2024-11-27 19:10:24.151564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.713 [2024-11-27 19:10:24.151575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72060 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72060 ']' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72060 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72060 00:12:14.713 killing process with pid 72060 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72060' 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72060 00:12:14.713 [2024-11-27 19:10:24.192147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.713 19:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72060 00:12:15.283 [2024-11-27 19:10:24.620974] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.222 19:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:16.222 00:12:16.222 real 0m11.942s 00:12:16.222 user 0m18.682s 00:12:16.222 sys 0m2.317s 00:12:16.222 19:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.222 19:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.222 ************************************ 00:12:16.222 END TEST raid_state_function_test_sb 00:12:16.222 ************************************ 00:12:16.481 19:10:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:16.481 19:10:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.481 19:10:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.481 19:10:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.481 ************************************ 00:12:16.481 START TEST raid_superblock_test 00:12:16.481 ************************************ 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72734 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72734 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72734 ']' 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.481 19:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.481 [2024-11-27 19:10:26.022613] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:16.481 [2024-11-27 19:10:26.022837] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72734 ] 00:12:16.741 [2024-11-27 19:10:26.200587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.741 [2024-11-27 19:10:26.340993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.001 [2024-11-27 19:10:26.581770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.001 [2024-11-27 19:10:26.581843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:17.261 
19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.261 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.523 malloc1 00:12:17.523 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.523 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:17.523 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.523 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.523 [2024-11-27 19:10:26.918151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:17.523 [2024-11-27 19:10:26.918213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.523 [2024-11-27 19:10:26.918236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:17.523 [2024-11-27 19:10:26.918246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.524 [2024-11-27 19:10:26.920745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.524 [2024-11-27 19:10:26.920820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:17.524 pt1 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 malloc2 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 [2024-11-27 19:10:26.981312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.524 [2024-11-27 19:10:26.981409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.524 [2024-11-27 19:10:26.981455] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:17.524 [2024-11-27 19:10:26.981484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.524 [2024-11-27 19:10:26.983928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.524 [2024-11-27 19:10:26.983999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.524 
pt2 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 malloc3 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 [2024-11-27 19:10:27.060621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.524 [2024-11-27 19:10:27.060744] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.524 [2024-11-27 19:10:27.060786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:17.524 [2024-11-27 19:10:27.060825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.524 [2024-11-27 19:10:27.063205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.524 [2024-11-27 19:10:27.063282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.524 pt3 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 malloc4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 [2024-11-27 19:10:27.127059] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:17.524 [2024-11-27 19:10:27.127162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.524 [2024-11-27 19:10:27.127190] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:17.524 [2024-11-27 19:10:27.127199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.524 [2024-11-27 19:10:27.129628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.524 [2024-11-27 19:10:27.129664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:17.524 pt4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.524 [2024-11-27 19:10:27.139071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:17.524 [2024-11-27 
19:10:27.141187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.524 [2024-11-27 19:10:27.141275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.524 [2024-11-27 19:10:27.141325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:17.524 [2024-11-27 19:10:27.141514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:17.524 [2024-11-27 19:10:27.141525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.524 [2024-11-27 19:10:27.141819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.524 [2024-11-27 19:10:27.142017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:17.524 [2024-11-27 19:10:27.142035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:17.524 [2024-11-27 19:10:27.142189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.524 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.784 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.784 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.784 "name": "raid_bdev1", 00:12:17.784 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:17.784 "strip_size_kb": 64, 00:12:17.784 "state": "online", 00:12:17.784 "raid_level": "concat", 00:12:17.784 "superblock": true, 00:12:17.784 "num_base_bdevs": 4, 00:12:17.784 "num_base_bdevs_discovered": 4, 00:12:17.784 "num_base_bdevs_operational": 4, 00:12:17.784 "base_bdevs_list": [ 00:12:17.784 { 00:12:17.784 "name": "pt1", 00:12:17.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.784 "is_configured": true, 00:12:17.784 "data_offset": 2048, 00:12:17.784 "data_size": 63488 00:12:17.784 }, 00:12:17.784 { 00:12:17.784 "name": "pt2", 00:12:17.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.784 "is_configured": true, 00:12:17.784 "data_offset": 2048, 00:12:17.784 "data_size": 63488 00:12:17.784 }, 00:12:17.784 { 00:12:17.784 "name": "pt3", 00:12:17.784 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.784 "is_configured": true, 00:12:17.784 "data_offset": 2048, 00:12:17.784 
"data_size": 63488 00:12:17.784 }, 00:12:17.784 { 00:12:17.784 "name": "pt4", 00:12:17.784 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.784 "is_configured": true, 00:12:17.784 "data_offset": 2048, 00:12:17.784 "data_size": 63488 00:12:17.784 } 00:12:17.784 ] 00:12:17.784 }' 00:12:17.784 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.784 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 [2024-11-27 19:10:27.626641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.044 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:18.044 "name": "raid_bdev1", 00:12:18.044 "aliases": [ 00:12:18.044 "d6c5442d-c378-4477-bdef-94ac802003a8" 
00:12:18.044 ], 00:12:18.044 "product_name": "Raid Volume", 00:12:18.044 "block_size": 512, 00:12:18.044 "num_blocks": 253952, 00:12:18.044 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:18.044 "assigned_rate_limits": { 00:12:18.044 "rw_ios_per_sec": 0, 00:12:18.044 "rw_mbytes_per_sec": 0, 00:12:18.044 "r_mbytes_per_sec": 0, 00:12:18.044 "w_mbytes_per_sec": 0 00:12:18.044 }, 00:12:18.044 "claimed": false, 00:12:18.044 "zoned": false, 00:12:18.044 "supported_io_types": { 00:12:18.044 "read": true, 00:12:18.044 "write": true, 00:12:18.044 "unmap": true, 00:12:18.044 "flush": true, 00:12:18.044 "reset": true, 00:12:18.044 "nvme_admin": false, 00:12:18.044 "nvme_io": false, 00:12:18.044 "nvme_io_md": false, 00:12:18.044 "write_zeroes": true, 00:12:18.044 "zcopy": false, 00:12:18.044 "get_zone_info": false, 00:12:18.044 "zone_management": false, 00:12:18.044 "zone_append": false, 00:12:18.044 "compare": false, 00:12:18.044 "compare_and_write": false, 00:12:18.044 "abort": false, 00:12:18.044 "seek_hole": false, 00:12:18.044 "seek_data": false, 00:12:18.044 "copy": false, 00:12:18.044 "nvme_iov_md": false 00:12:18.044 }, 00:12:18.044 "memory_domains": [ 00:12:18.044 { 00:12:18.044 "dma_device_id": "system", 00:12:18.044 "dma_device_type": 1 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.044 "dma_device_type": 2 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": "system", 00:12:18.044 "dma_device_type": 1 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.044 "dma_device_type": 2 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": "system", 00:12:18.044 "dma_device_type": 1 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.044 "dma_device_type": 2 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": "system", 00:12:18.044 "dma_device_type": 1 00:12:18.044 }, 00:12:18.044 { 00:12:18.044 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:18.044 "dma_device_type": 2 00:12:18.044 } 00:12:18.044 ], 00:12:18.044 "driver_specific": { 00:12:18.044 "raid": { 00:12:18.044 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:18.044 "strip_size_kb": 64, 00:12:18.044 "state": "online", 00:12:18.044 "raid_level": "concat", 00:12:18.044 "superblock": true, 00:12:18.044 "num_base_bdevs": 4, 00:12:18.044 "num_base_bdevs_discovered": 4, 00:12:18.045 "num_base_bdevs_operational": 4, 00:12:18.045 "base_bdevs_list": [ 00:12:18.045 { 00:12:18.045 "name": "pt1", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 }, 00:12:18.045 { 00:12:18.045 "name": "pt2", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 }, 00:12:18.045 { 00:12:18.045 "name": "pt3", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 }, 00:12:18.045 { 00:12:18.045 "name": "pt4", 00:12:18.045 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.045 "is_configured": true, 00:12:18.045 "data_offset": 2048, 00:12:18.045 "data_size": 63488 00:12:18.045 } 00:12:18.045 ] 00:12:18.045 } 00:12:18.045 } 00:12:18.045 }' 00:12:18.045 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:18.305 pt2 00:12:18.305 pt3 00:12:18.305 pt4' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.305 19:10:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:18.305 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.305 [2024-11-27 19:10:27.922068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d6c5442d-c378-4477-bdef-94ac802003a8 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d6c5442d-c378-4477-bdef-94ac802003a8 ']' 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 [2024-11-27 19:10:27.969654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.566 [2024-11-27 19:10:27.969684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.566 [2024-11-27 19:10:27.969800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.566 [2024-11-27 19:10:27.969880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.566 [2024-11-27 19:10:27.969897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.566 19:10:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 [2024-11-27 19:10:28.141395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:18.566 [2024-11-27 19:10:28.143640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:18.566 [2024-11-27 19:10:28.143752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:18.566 [2024-11-27 19:10:28.143811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:18.566 [2024-11-27 19:10:28.143887] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:18.566 [2024-11-27 19:10:28.143985] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:18.566 [2024-11-27 19:10:28.144070] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:18.566 [2024-11-27 19:10:28.144128] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:18.566 [2024-11-27 19:10:28.144180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.566 [2024-11-27 19:10:28.144214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:18.566 request: 00:12:18.566 { 00:12:18.566 "name": "raid_bdev1", 00:12:18.566 "raid_level": "concat", 00:12:18.566 "base_bdevs": [ 00:12:18.566 "malloc1", 00:12:18.566 "malloc2", 00:12:18.566 "malloc3", 00:12:18.566 "malloc4" 00:12:18.566 ], 00:12:18.566 "strip_size_kb": 64, 00:12:18.566 "superblock": false, 00:12:18.566 "method": "bdev_raid_create", 00:12:18.566 "req_id": 1 00:12:18.566 } 00:12:18.566 Got JSON-RPC error response 00:12:18.566 response: 00:12:18.566 { 00:12:18.566 "code": -17, 00:12:18.566 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:18.566 } 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.566 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.566 [2024-11-27 19:10:28.193250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:18.566 [2024-11-27 19:10:28.193352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.566 [2024-11-27 19:10:28.193393] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:18.566 [2024-11-27 19:10:28.193426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.566 [2024-11-27 19:10:28.196067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.567 [2024-11-27 19:10:28.196152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:18.567 [2024-11-27 19:10:28.196276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:18.567 [2024-11-27 19:10:28.196385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.567 pt1 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.567 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.826 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.826 "name": "raid_bdev1", 00:12:18.826 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:18.826 "strip_size_kb": 64, 00:12:18.826 "state": "configuring", 00:12:18.826 "raid_level": "concat", 00:12:18.826 "superblock": true, 00:12:18.826 "num_base_bdevs": 4, 00:12:18.826 "num_base_bdevs_discovered": 1, 00:12:18.826 "num_base_bdevs_operational": 4, 00:12:18.826 "base_bdevs_list": [ 00:12:18.826 { 00:12:18.826 "name": "pt1", 00:12:18.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.826 "is_configured": true, 00:12:18.826 "data_offset": 2048, 00:12:18.826 "data_size": 63488 00:12:18.826 }, 00:12:18.826 { 00:12:18.826 "name": null, 00:12:18.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.826 "is_configured": false, 00:12:18.826 "data_offset": 2048, 00:12:18.827 "data_size": 63488 00:12:18.827 }, 00:12:18.827 { 00:12:18.827 "name": null, 00:12:18.827 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.827 "is_configured": false, 00:12:18.827 "data_offset": 2048, 00:12:18.827 "data_size": 63488 00:12:18.827 }, 00:12:18.827 { 00:12:18.827 "name": null, 00:12:18.827 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.827 "is_configured": false, 00:12:18.827 "data_offset": 2048, 00:12:18.827 "data_size": 63488 00:12:18.827 } 00:12:18.827 ] 00:12:18.827 }' 00:12:18.827 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.827 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.086 [2024-11-27 19:10:28.656521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.086 [2024-11-27 19:10:28.656675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.086 [2024-11-27 19:10:28.656719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:19.086 [2024-11-27 19:10:28.656733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.086 [2024-11-27 19:10:28.657262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.086 [2024-11-27 19:10:28.657284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.086 [2024-11-27 19:10:28.657379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:19.086 [2024-11-27 19:10:28.657407] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.086 pt2 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.086 [2024-11-27 19:10:28.668486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.086 19:10:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.086 19:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.346 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.346 "name": "raid_bdev1", 00:12:19.346 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:19.346 "strip_size_kb": 64, 00:12:19.346 "state": "configuring", 00:12:19.346 "raid_level": "concat", 00:12:19.346 "superblock": true, 00:12:19.346 "num_base_bdevs": 4, 00:12:19.346 "num_base_bdevs_discovered": 1, 00:12:19.346 "num_base_bdevs_operational": 4, 00:12:19.346 "base_bdevs_list": [ 00:12:19.346 { 00:12:19.346 "name": "pt1", 00:12:19.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.346 "is_configured": true, 00:12:19.346 "data_offset": 2048, 00:12:19.346 "data_size": 63488 00:12:19.346 }, 00:12:19.346 { 00:12:19.346 "name": null, 00:12:19.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.346 "is_configured": false, 00:12:19.346 "data_offset": 0, 00:12:19.346 "data_size": 63488 00:12:19.346 }, 00:12:19.346 { 00:12:19.346 "name": null, 00:12:19.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.346 "is_configured": false, 00:12:19.346 "data_offset": 2048, 00:12:19.346 "data_size": 63488 00:12:19.346 }, 00:12:19.346 { 00:12:19.346 "name": null, 00:12:19.346 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.346 "is_configured": false, 00:12:19.346 "data_offset": 2048, 00:12:19.346 "data_size": 63488 00:12:19.346 } 00:12:19.346 ] 00:12:19.346 }' 00:12:19.346 19:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.346 19:10:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.606 [2024-11-27 19:10:29.163725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.606 [2024-11-27 19:10:29.163864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.606 [2024-11-27 19:10:29.163905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:19.606 [2024-11-27 19:10:29.163945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.606 [2024-11-27 19:10:29.164521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.606 [2024-11-27 19:10:29.164581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.606 [2024-11-27 19:10:29.164737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:19.606 [2024-11-27 19:10:29.164799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.606 pt2 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.606 [2024-11-27 19:10:29.175641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.606 [2024-11-27 19:10:29.175741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.606 [2024-11-27 19:10:29.175778] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:19.606 [2024-11-27 19:10:29.175809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.606 [2024-11-27 19:10:29.176243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.606 [2024-11-27 19:10:29.176299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.606 [2024-11-27 19:10:29.176390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:19.606 [2024-11-27 19:10:29.176444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.606 pt3 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.606 [2024-11-27 19:10:29.187594] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:19.606 [2024-11-27 19:10:29.187674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.606 [2024-11-27 19:10:29.187724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:19.606 [2024-11-27 19:10:29.187757] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.606 [2024-11-27 19:10:29.188181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.606 [2024-11-27 19:10:29.188236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:19.606 [2024-11-27 19:10:29.188332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:19.606 [2024-11-27 19:10:29.188382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:19.606 [2024-11-27 19:10:29.188576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:19.606 [2024-11-27 19:10:29.188614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:19.606 [2024-11-27 19:10:29.188911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:19.606 [2024-11-27 19:10:29.189088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:19.606 [2024-11-27 19:10:29.189102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:19.606 [2024-11-27 19:10:29.189230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.606 pt4 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.606 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.866 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.866 "name": "raid_bdev1", 00:12:19.866 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:19.866 "strip_size_kb": 64, 00:12:19.866 "state": "online", 00:12:19.866 "raid_level": "concat", 00:12:19.866 
"superblock": true, 00:12:19.866 "num_base_bdevs": 4, 00:12:19.866 "num_base_bdevs_discovered": 4, 00:12:19.866 "num_base_bdevs_operational": 4, 00:12:19.866 "base_bdevs_list": [ 00:12:19.866 { 00:12:19.866 "name": "pt1", 00:12:19.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.866 "is_configured": true, 00:12:19.866 "data_offset": 2048, 00:12:19.866 "data_size": 63488 00:12:19.866 }, 00:12:19.866 { 00:12:19.866 "name": "pt2", 00:12:19.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.866 "is_configured": true, 00:12:19.866 "data_offset": 2048, 00:12:19.866 "data_size": 63488 00:12:19.866 }, 00:12:19.866 { 00:12:19.866 "name": "pt3", 00:12:19.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.866 "is_configured": true, 00:12:19.866 "data_offset": 2048, 00:12:19.866 "data_size": 63488 00:12:19.866 }, 00:12:19.866 { 00:12:19.866 "name": "pt4", 00:12:19.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.866 "is_configured": true, 00:12:19.866 "data_offset": 2048, 00:12:19.866 "data_size": 63488 00:12:19.866 } 00:12:19.866 ] 00:12:19.866 }' 00:12:19.866 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.866 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.124 19:10:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.124 [2024-11-27 19:10:29.647239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.124 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.124 "name": "raid_bdev1", 00:12:20.124 "aliases": [ 00:12:20.124 "d6c5442d-c378-4477-bdef-94ac802003a8" 00:12:20.124 ], 00:12:20.124 "product_name": "Raid Volume", 00:12:20.124 "block_size": 512, 00:12:20.124 "num_blocks": 253952, 00:12:20.124 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:20.124 "assigned_rate_limits": { 00:12:20.124 "rw_ios_per_sec": 0, 00:12:20.124 "rw_mbytes_per_sec": 0, 00:12:20.124 "r_mbytes_per_sec": 0, 00:12:20.124 "w_mbytes_per_sec": 0 00:12:20.124 }, 00:12:20.124 "claimed": false, 00:12:20.124 "zoned": false, 00:12:20.124 "supported_io_types": { 00:12:20.124 "read": true, 00:12:20.124 "write": true, 00:12:20.124 "unmap": true, 00:12:20.124 "flush": true, 00:12:20.124 "reset": true, 00:12:20.124 "nvme_admin": false, 00:12:20.124 "nvme_io": false, 00:12:20.124 "nvme_io_md": false, 00:12:20.124 "write_zeroes": true, 00:12:20.124 "zcopy": false, 00:12:20.124 "get_zone_info": false, 00:12:20.124 "zone_management": false, 00:12:20.124 "zone_append": false, 00:12:20.124 "compare": false, 00:12:20.124 "compare_and_write": false, 00:12:20.124 "abort": false, 00:12:20.124 "seek_hole": false, 00:12:20.124 "seek_data": false, 00:12:20.124 "copy": false, 00:12:20.124 "nvme_iov_md": false 00:12:20.124 }, 00:12:20.124 
"memory_domains": [ 00:12:20.124 { 00:12:20.124 "dma_device_id": "system", 00:12:20.124 "dma_device_type": 1 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.124 "dma_device_type": 2 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "system", 00:12:20.124 "dma_device_type": 1 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.124 "dma_device_type": 2 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "system", 00:12:20.124 "dma_device_type": 1 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.124 "dma_device_type": 2 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "system", 00:12:20.124 "dma_device_type": 1 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.124 "dma_device_type": 2 00:12:20.124 } 00:12:20.124 ], 00:12:20.124 "driver_specific": { 00:12:20.124 "raid": { 00:12:20.124 "uuid": "d6c5442d-c378-4477-bdef-94ac802003a8", 00:12:20.124 "strip_size_kb": 64, 00:12:20.124 "state": "online", 00:12:20.124 "raid_level": "concat", 00:12:20.124 "superblock": true, 00:12:20.124 "num_base_bdevs": 4, 00:12:20.124 "num_base_bdevs_discovered": 4, 00:12:20.124 "num_base_bdevs_operational": 4, 00:12:20.124 "base_bdevs_list": [ 00:12:20.124 { 00:12:20.124 "name": "pt1", 00:12:20.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.124 "is_configured": true, 00:12:20.124 "data_offset": 2048, 00:12:20.124 "data_size": 63488 00:12:20.124 }, 00:12:20.124 { 00:12:20.124 "name": "pt2", 00:12:20.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.124 "is_configured": true, 00:12:20.125 "data_offset": 2048, 00:12:20.125 "data_size": 63488 00:12:20.125 }, 00:12:20.125 { 00:12:20.125 "name": "pt3", 00:12:20.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.125 "is_configured": true, 00:12:20.125 "data_offset": 2048, 00:12:20.125 "data_size": 63488 
00:12:20.125 }, 00:12:20.125 { 00:12:20.125 "name": "pt4", 00:12:20.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:20.125 "is_configured": true, 00:12:20.125 "data_offset": 2048, 00:12:20.125 "data_size": 63488 00:12:20.125 } 00:12:20.125 ] 00:12:20.125 } 00:12:20.125 } 00:12:20.125 }' 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:20.125 pt2 00:12:20.125 pt3 00:12:20.125 pt4' 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.125 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.385 [2024-11-27 19:10:29.906688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d6c5442d-c378-4477-bdef-94ac802003a8 '!=' d6c5442d-c378-4477-bdef-94ac802003a8 ']' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72734 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72734 ']' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72734 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72734 00:12:20.385 killing process with pid 72734 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72734' 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72734 00:12:20.385 [2024-11-27 19:10:29.997537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.385 [2024-11-27 19:10:29.997637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.385 19:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72734 00:12:20.385 [2024-11-27 19:10:29.997735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.385 [2024-11-27 19:10:29.997745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:20.955 [2024-11-27 19:10:30.440435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.342 19:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:22.342 00:12:22.342 real 0m5.738s 00:12:22.342 user 0m7.989s 00:12:22.342 sys 0m1.082s 00:12:22.342 ************************************ 00:12:22.342 END TEST raid_superblock_test 00:12:22.342 ************************************ 00:12:22.342 19:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.342 19:10:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.342 19:10:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:22.342 19:10:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:22.342 19:10:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.342 19:10:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.342 ************************************ 00:12:22.342 START TEST raid_read_error_test 00:12:22.342 ************************************ 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:22.342 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DNfvVs9JdC 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72995 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72995 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72995 ']' 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.343 19:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.343 [2024-11-27 19:10:31.852028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:22.343 [2024-11-27 19:10:31.852145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:12:22.602 [2024-11-27 19:10:32.010749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.602 [2024-11-27 19:10:32.143559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.885 [2024-11-27 19:10:32.385810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.885 [2024-11-27 19:10:32.385897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.163 BaseBdev1_malloc 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.163 true 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.163 [2024-11-27 19:10:32.745436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:23.163 [2024-11-27 19:10:32.745496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.163 [2024-11-27 19:10:32.745516] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:23.163 [2024-11-27 19:10:32.745527] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.163 [2024-11-27 19:10:32.748002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.163 [2024-11-27 19:10:32.748042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.163 BaseBdev1 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.163 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 BaseBdev2_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 true 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 [2024-11-27 19:10:32.818425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:23.424 [2024-11-27 19:10:32.818492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.424 [2024-11-27 19:10:32.818508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:23.424 [2024-11-27 19:10:32.818520] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.424 [2024-11-27 19:10:32.820905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.424 [2024-11-27 19:10:32.820940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.424 BaseBdev2 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 BaseBdev3_malloc 00:12:23.424 19:10:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 true 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 [2024-11-27 19:10:32.904800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:23.424 [2024-11-27 19:10:32.904852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.424 [2024-11-27 19:10:32.904870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:23.424 [2024-11-27 19:10:32.904882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.424 [2024-11-27 19:10:32.907346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.424 [2024-11-27 19:10:32.907385] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:23.424 BaseBdev3 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 BaseBdev4_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 true 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.424 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.424 [2024-11-27 19:10:32.978878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:23.425 [2024-11-27 19:10:32.978975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.425 [2024-11-27 19:10:32.978997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:23.425 [2024-11-27 19:10:32.979009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.425 [2024-11-27 19:10:32.981349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.425 [2024-11-27 19:10:32.981390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:23.425 BaseBdev4 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.425 [2024-11-27 19:10:32.990932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.425 [2024-11-27 19:10:32.993026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.425 [2024-11-27 19:10:32.993102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.425 [2024-11-27 19:10:32.993163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.425 [2024-11-27 19:10:32.993401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:23.425 [2024-11-27 19:10:32.993418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:23.425 [2024-11-27 19:10:32.993657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:23.425 [2024-11-27 19:10:32.993840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:23.425 [2024-11-27 19:10:32.993853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:23.425 [2024-11-27 19:10:32.994010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:23.425 19:10:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.425 19:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.425 "name": "raid_bdev1", 00:12:23.425 "uuid": "02bb91c6-32dc-45b9-b4c5-5a0cb25afa78", 00:12:23.425 "strip_size_kb": 64, 00:12:23.425 "state": "online", 00:12:23.425 "raid_level": "concat", 00:12:23.425 "superblock": true, 00:12:23.425 "num_base_bdevs": 4, 00:12:23.425 "num_base_bdevs_discovered": 4, 00:12:23.425 "num_base_bdevs_operational": 4, 00:12:23.425 "base_bdevs_list": [ 
00:12:23.425 { 00:12:23.425 "name": "BaseBdev1", 00:12:23.425 "uuid": "592a5db9-cef0-5a07-a8f8-177148944377", 00:12:23.425 "is_configured": true, 00:12:23.425 "data_offset": 2048, 00:12:23.425 "data_size": 63488 00:12:23.425 }, 00:12:23.425 { 00:12:23.425 "name": "BaseBdev2", 00:12:23.425 "uuid": "5d7053fb-5259-5227-ab82-b34cdcce8081", 00:12:23.425 "is_configured": true, 00:12:23.425 "data_offset": 2048, 00:12:23.425 "data_size": 63488 00:12:23.425 }, 00:12:23.425 { 00:12:23.425 "name": "BaseBdev3", 00:12:23.425 "uuid": "e4cf36fe-204d-5be5-84c0-7e2f7f0b1d03", 00:12:23.425 "is_configured": true, 00:12:23.425 "data_offset": 2048, 00:12:23.425 "data_size": 63488 00:12:23.425 }, 00:12:23.425 { 00:12:23.425 "name": "BaseBdev4", 00:12:23.425 "uuid": "37e0ca91-d048-5d55-9fa0-2d5ef0826144", 00:12:23.425 "is_configured": true, 00:12:23.425 "data_offset": 2048, 00:12:23.425 "data_size": 63488 00:12:23.425 } 00:12:23.425 ] 00:12:23.425 }' 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.425 19:10:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.995 19:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.995 19:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:23.995 [2024-11-27 19:10:33.531512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.935 19:10:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.935 19:10:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.935 "name": "raid_bdev1", 00:12:24.935 "uuid": "02bb91c6-32dc-45b9-b4c5-5a0cb25afa78", 00:12:24.935 "strip_size_kb": 64, 00:12:24.935 "state": "online", 00:12:24.935 "raid_level": "concat", 00:12:24.935 "superblock": true, 00:12:24.935 "num_base_bdevs": 4, 00:12:24.935 "num_base_bdevs_discovered": 4, 00:12:24.935 "num_base_bdevs_operational": 4, 00:12:24.935 "base_bdevs_list": [ 00:12:24.935 { 00:12:24.935 "name": "BaseBdev1", 00:12:24.935 "uuid": "592a5db9-cef0-5a07-a8f8-177148944377", 00:12:24.935 "is_configured": true, 00:12:24.935 "data_offset": 2048, 00:12:24.935 "data_size": 63488 00:12:24.935 }, 00:12:24.935 { 00:12:24.935 "name": "BaseBdev2", 00:12:24.935 "uuid": "5d7053fb-5259-5227-ab82-b34cdcce8081", 00:12:24.935 "is_configured": true, 00:12:24.935 "data_offset": 2048, 00:12:24.935 "data_size": 63488 00:12:24.935 }, 00:12:24.935 { 00:12:24.935 "name": "BaseBdev3", 00:12:24.935 "uuid": "e4cf36fe-204d-5be5-84c0-7e2f7f0b1d03", 00:12:24.935 "is_configured": true, 00:12:24.935 "data_offset": 2048, 00:12:24.935 "data_size": 63488 00:12:24.935 }, 00:12:24.935 { 00:12:24.935 "name": "BaseBdev4", 00:12:24.935 "uuid": "37e0ca91-d048-5d55-9fa0-2d5ef0826144", 00:12:24.935 "is_configured": true, 00:12:24.935 "data_offset": 2048, 00:12:24.935 "data_size": 63488 00:12:24.935 } 00:12:24.935 ] 00:12:24.935 }' 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.935 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.505 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.505 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.505 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.506 [2024-11-27 19:10:34.872496] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.506 [2024-11-27 19:10:34.872626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.506 [2024-11-27 19:10:34.875425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.506 [2024-11-27 19:10:34.875537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.506 [2024-11-27 19:10:34.875615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.506 [2024-11-27 19:10:34.875667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:25.506 { 00:12:25.506 "results": [ 00:12:25.506 { 00:12:25.506 "job": "raid_bdev1", 00:12:25.506 "core_mask": "0x1", 00:12:25.506 "workload": "randrw", 00:12:25.506 "percentage": 50, 00:12:25.506 "status": "finished", 00:12:25.506 "queue_depth": 1, 00:12:25.506 "io_size": 131072, 00:12:25.506 "runtime": 1.341721, 00:12:25.506 "iops": 13374.61364918638, 00:12:25.506 "mibps": 1671.8267061482975, 00:12:25.506 "io_failed": 1, 00:12:25.506 "io_timeout": 0, 00:12:25.506 "avg_latency_us": 105.07876467831443, 00:12:25.506 "min_latency_us": 25.2646288209607, 00:12:25.506 "max_latency_us": 1416.6078602620087 00:12:25.506 } 00:12:25.506 ], 00:12:25.506 "core_count": 1 00:12:25.506 } 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72995 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72995 ']' 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72995 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72995 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72995' 00:12:25.506 killing process with pid 72995 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72995 00:12:25.506 [2024-11-27 19:10:34.908992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.506 19:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72995 00:12:25.766 [2024-11-27 19:10:35.261144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DNfvVs9JdC 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:27.147 ************************************ 00:12:27.147 END TEST raid_read_error_test 00:12:27.147 ************************************ 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:27.147 00:12:27.147 real 0m4.816s 
00:12:27.147 user 0m5.475s 00:12:27.147 sys 0m0.717s 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.147 19:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.147 19:10:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:27.147 19:10:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:27.147 19:10:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.147 19:10:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.147 ************************************ 00:12:27.147 START TEST raid_write_error_test 00:12:27.147 ************************************ 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:27.147 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lL0K6c3yLl 00:12:27.148 19:10:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73141 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73141 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73141 ']' 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.148 19:10:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.148 [2024-11-27 19:10:36.742597] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:27.148 [2024-11-27 19:10:36.742866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73141 ] 00:12:27.408 [2024-11-27 19:10:36.917807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.667 [2024-11-27 19:10:37.054931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.667 [2024-11-27 19:10:37.284521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.667 [2024-11-27 19:10:37.284656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 BaseBdev1_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 true 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 [2024-11-27 19:10:37.641183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:28.237 [2024-11-27 19:10:37.641287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.237 [2024-11-27 19:10:37.641312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:28.237 [2024-11-27 19:10:37.641324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.237 [2024-11-27 19:10:37.643661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.237 [2024-11-27 19:10:37.643713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.237 BaseBdev1 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 BaseBdev2_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:28.237 19:10:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 true 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 [2024-11-27 19:10:37.712355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:28.237 [2024-11-27 19:10:37.712413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.237 [2024-11-27 19:10:37.712430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:28.237 [2024-11-27 19:10:37.712442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.237 [2024-11-27 19:10:37.714832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.237 [2024-11-27 19:10:37.714923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:28.237 BaseBdev2 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:28.237 BaseBdev3_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 true 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.237 [2024-11-27 19:10:37.816600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:28.237 [2024-11-27 19:10:37.816664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.237 [2024-11-27 19:10:37.816685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:28.237 [2024-11-27 19:10:37.816710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.237 [2024-11-27 19:10:37.819116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.237 [2024-11-27 19:10:37.819202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:28.237 BaseBdev3 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.237 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 BaseBdev4_malloc 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 true 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 [2024-11-27 19:10:37.891508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:28.497 [2024-11-27 19:10:37.891567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.497 [2024-11-27 19:10:37.891587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:28.497 [2024-11-27 19:10:37.891599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.497 [2024-11-27 19:10:37.893963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.497 [2024-11-27 19:10:37.894001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:28.497 BaseBdev4 
00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.497 [2024-11-27 19:10:37.903563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.497 [2024-11-27 19:10:37.905729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:28.497 [2024-11-27 19:10:37.905806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.497 [2024-11-27 19:10:37.905868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:28.497 [2024-11-27 19:10:37.906093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:28.497 [2024-11-27 19:10:37.906115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:28.497 [2024-11-27 19:10:37.906365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:28.497 [2024-11-27 19:10:37.906539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:28.497 [2024-11-27 19:10:37.906551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:28.497 [2024-11-27 19:10:37.906799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.497 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.498 "name": "raid_bdev1", 00:12:28.498 "uuid": "c96f6815-29f9-4686-a08a-0f5cd14c0697", 00:12:28.498 "strip_size_kb": 64, 00:12:28.498 "state": "online", 00:12:28.498 "raid_level": "concat", 00:12:28.498 "superblock": true, 00:12:28.498 "num_base_bdevs": 4, 00:12:28.498 "num_base_bdevs_discovered": 4, 00:12:28.498 
"num_base_bdevs_operational": 4, 00:12:28.498 "base_bdevs_list": [ 00:12:28.498 { 00:12:28.498 "name": "BaseBdev1", 00:12:28.498 "uuid": "ac9b0d47-0ed4-5e24-b20d-0a9cb440528e", 00:12:28.498 "is_configured": true, 00:12:28.498 "data_offset": 2048, 00:12:28.498 "data_size": 63488 00:12:28.498 }, 00:12:28.498 { 00:12:28.498 "name": "BaseBdev2", 00:12:28.498 "uuid": "6c4eceff-dbc2-5052-b90f-95256f79abfc", 00:12:28.498 "is_configured": true, 00:12:28.498 "data_offset": 2048, 00:12:28.498 "data_size": 63488 00:12:28.498 }, 00:12:28.498 { 00:12:28.498 "name": "BaseBdev3", 00:12:28.498 "uuid": "43107c5a-a6be-59c1-846d-0e7d8788f57d", 00:12:28.498 "is_configured": true, 00:12:28.498 "data_offset": 2048, 00:12:28.498 "data_size": 63488 00:12:28.498 }, 00:12:28.498 { 00:12:28.498 "name": "BaseBdev4", 00:12:28.498 "uuid": "fcf4a367-a807-5a45-9e54-36d2219c9b02", 00:12:28.498 "is_configured": true, 00:12:28.498 "data_offset": 2048, 00:12:28.498 "data_size": 63488 00:12:28.498 } 00:12:28.498 ] 00:12:28.498 }' 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.498 19:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.758 19:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:28.758 19:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:29.018 [2024-11-27 19:10:38.428115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 19:10:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.956 "name": "raid_bdev1", 00:12:29.956 "uuid": "c96f6815-29f9-4686-a08a-0f5cd14c0697", 00:12:29.956 "strip_size_kb": 64, 00:12:29.956 "state": "online", 00:12:29.956 "raid_level": "concat", 00:12:29.956 "superblock": true, 00:12:29.956 "num_base_bdevs": 4, 00:12:29.956 "num_base_bdevs_discovered": 4, 00:12:29.956 "num_base_bdevs_operational": 4, 00:12:29.956 "base_bdevs_list": [ 00:12:29.956 { 00:12:29.956 "name": "BaseBdev1", 00:12:29.956 "uuid": "ac9b0d47-0ed4-5e24-b20d-0a9cb440528e", 00:12:29.956 "is_configured": true, 00:12:29.956 "data_offset": 2048, 00:12:29.956 "data_size": 63488 00:12:29.956 }, 00:12:29.956 { 00:12:29.956 "name": "BaseBdev2", 00:12:29.956 "uuid": "6c4eceff-dbc2-5052-b90f-95256f79abfc", 00:12:29.956 "is_configured": true, 00:12:29.956 "data_offset": 2048, 00:12:29.956 "data_size": 63488 00:12:29.956 }, 00:12:29.956 { 00:12:29.956 "name": "BaseBdev3", 00:12:29.956 "uuid": "43107c5a-a6be-59c1-846d-0e7d8788f57d", 00:12:29.956 "is_configured": true, 00:12:29.956 "data_offset": 2048, 00:12:29.956 "data_size": 63488 00:12:29.956 }, 00:12:29.956 { 00:12:29.956 "name": "BaseBdev4", 00:12:29.956 "uuid": "fcf4a367-a807-5a45-9e54-36d2219c9b02", 00:12:29.956 "is_configured": true, 00:12:29.956 "data_offset": 2048, 00:12:29.956 "data_size": 63488 00:12:29.956 } 00:12:29.956 ] 00:12:29.956 }' 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.956 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.216 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.216 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.216 19:10:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.475 [2024-11-27 19:10:39.849148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.475 [2024-11-27 19:10:39.849189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.475 [2024-11-27 19:10:39.851869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.475 [2024-11-27 19:10:39.851934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.475 [2024-11-27 19:10:39.851983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.475 [2024-11-27 19:10:39.852000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:30.475 { 00:12:30.475 "results": [ 00:12:30.475 { 00:12:30.475 "job": "raid_bdev1", 00:12:30.475 "core_mask": "0x1", 00:12:30.475 "workload": "randrw", 00:12:30.475 "percentage": 50, 00:12:30.475 "status": "finished", 00:12:30.475 "queue_depth": 1, 00:12:30.475 "io_size": 131072, 00:12:30.475 "runtime": 1.421711, 00:12:30.475 "iops": 13332.526793420042, 00:12:30.475 "mibps": 1666.5658491775052, 00:12:30.475 "io_failed": 1, 00:12:30.475 "io_timeout": 0, 00:12:30.475 "avg_latency_us": 105.5750954405099, 00:12:30.475 "min_latency_us": 26.382532751091702, 00:12:30.475 "max_latency_us": 1387.989519650655 00:12:30.475 } 00:12:30.475 ], 00:12:30.475 "core_count": 1 00:12:30.475 } 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73141 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73141 ']' 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73141 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73141 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.475 killing process with pid 73141 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73141' 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73141 00:12:30.475 [2024-11-27 19:10:39.898550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.475 19:10:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73141 00:12:30.735 [2024-11-27 19:10:40.252611] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.116 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:32.116 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lL0K6c3yLl 00:12:32.116 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:32.116 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:32.116 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:32.117 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.117 19:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:32.117 ************************************ 00:12:32.117 END TEST raid_write_error_test 00:12:32.117 ************************************ 00:12:32.117 19:10:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:32.117 00:12:32.117 real 0m4.913s 00:12:32.117 user 0m5.645s 00:12:32.117 sys 0m0.724s 00:12:32.117 19:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.117 19:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 19:10:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:32.117 19:10:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:32.117 19:10:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.117 19:10:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.117 19:10:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 ************************************ 00:12:32.117 START TEST raid_state_function_test 00:12:32.117 ************************************ 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:32.117 19:10:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73290 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73290' 00:12:32.117 Process raid pid: 73290 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73290 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73290 ']' 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.117 19:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 [2024-11-27 19:10:41.720741] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:32.117 [2024-11-27 19:10:41.720875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.376 [2024-11-27 19:10:41.901072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.636 [2024-11-27 19:10:42.041577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.896 [2024-11-27 19:10:42.284716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.896 [2024-11-27 19:10:42.284785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.155 [2024-11-27 19:10:42.547938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.155 [2024-11-27 19:10:42.547997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.155 [2024-11-27 19:10:42.548014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.155 [2024-11-27 19:10:42.548024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.155 [2024-11-27 19:10:42.548030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:33.155 [2024-11-27 19:10:42.548040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.155 [2024-11-27 19:10:42.548046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:33.155 [2024-11-27 19:10:42.548055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.155 "name": "Existed_Raid", 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.155 "strip_size_kb": 0, 00:12:33.155 "state": "configuring", 00:12:33.155 "raid_level": "raid1", 00:12:33.155 "superblock": false, 00:12:33.155 "num_base_bdevs": 4, 00:12:33.155 "num_base_bdevs_discovered": 0, 00:12:33.155 "num_base_bdevs_operational": 4, 00:12:33.155 "base_bdevs_list": [ 00:12:33.155 { 00:12:33.155 "name": "BaseBdev1", 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.155 "is_configured": false, 00:12:33.155 "data_offset": 0, 00:12:33.155 "data_size": 0 00:12:33.155 }, 00:12:33.155 { 00:12:33.155 "name": "BaseBdev2", 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.155 "is_configured": false, 00:12:33.155 "data_offset": 0, 00:12:33.155 "data_size": 0 00:12:33.155 }, 00:12:33.155 { 00:12:33.155 "name": "BaseBdev3", 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.155 "is_configured": false, 00:12:33.155 "data_offset": 0, 00:12:33.155 "data_size": 0 00:12:33.155 }, 00:12:33.155 { 00:12:33.155 "name": "BaseBdev4", 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.155 "is_configured": false, 00:12:33.155 "data_offset": 0, 00:12:33.155 "data_size": 0 00:12:33.155 } 00:12:33.155 ] 00:12:33.155 }' 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.155 19:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.415 [2024-11-27 19:10:43.027092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.415 [2024-11-27 19:10:43.027206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.415 [2024-11-27 19:10:43.039044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.415 [2024-11-27 19:10:43.039123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.415 [2024-11-27 19:10:43.039163] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.415 [2024-11-27 19:10:43.039187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.415 [2024-11-27 19:10:43.039216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.415 [2024-11-27 19:10:43.039260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.415 [2024-11-27 19:10:43.039285] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:33.415 [2024-11-27 19:10:43.039307] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.415 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 [2024-11-27 19:10:43.093250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.676 BaseBdev1 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 [ 00:12:33.676 { 00:12:33.676 "name": "BaseBdev1", 00:12:33.676 "aliases": [ 00:12:33.676 "7e343c1b-8371-4c7a-b972-b55acdaf336a" 00:12:33.676 ], 00:12:33.676 "product_name": "Malloc disk", 00:12:33.676 "block_size": 512, 00:12:33.676 "num_blocks": 65536, 00:12:33.676 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:33.676 "assigned_rate_limits": { 00:12:33.676 "rw_ios_per_sec": 0, 00:12:33.676 "rw_mbytes_per_sec": 0, 00:12:33.676 "r_mbytes_per_sec": 0, 00:12:33.676 "w_mbytes_per_sec": 0 00:12:33.676 }, 00:12:33.676 "claimed": true, 00:12:33.676 "claim_type": "exclusive_write", 00:12:33.676 "zoned": false, 00:12:33.676 "supported_io_types": { 00:12:33.676 "read": true, 00:12:33.676 "write": true, 00:12:33.676 "unmap": true, 00:12:33.676 "flush": true, 00:12:33.676 "reset": true, 00:12:33.676 "nvme_admin": false, 00:12:33.676 "nvme_io": false, 00:12:33.676 "nvme_io_md": false, 00:12:33.676 "write_zeroes": true, 00:12:33.676 "zcopy": true, 00:12:33.676 "get_zone_info": false, 00:12:33.676 "zone_management": false, 00:12:33.676 "zone_append": false, 00:12:33.676 "compare": false, 00:12:33.676 "compare_and_write": false, 00:12:33.676 "abort": true, 00:12:33.676 "seek_hole": false, 00:12:33.676 "seek_data": false, 00:12:33.676 "copy": true, 00:12:33.676 "nvme_iov_md": false 00:12:33.676 }, 00:12:33.676 "memory_domains": [ 00:12:33.676 { 00:12:33.676 "dma_device_id": "system", 00:12:33.676 "dma_device_type": 1 00:12:33.676 }, 00:12:33.676 { 00:12:33.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.676 "dma_device_type": 2 00:12:33.676 } 00:12:33.676 ], 00:12:33.676 "driver_specific": {} 00:12:33.676 } 00:12:33.676 ] 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.676 "name": "Existed_Raid", 
00:12:33.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.676 "strip_size_kb": 0, 00:12:33.676 "state": "configuring", 00:12:33.676 "raid_level": "raid1", 00:12:33.676 "superblock": false, 00:12:33.676 "num_base_bdevs": 4, 00:12:33.676 "num_base_bdevs_discovered": 1, 00:12:33.676 "num_base_bdevs_operational": 4, 00:12:33.676 "base_bdevs_list": [ 00:12:33.676 { 00:12:33.676 "name": "BaseBdev1", 00:12:33.676 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:33.676 "is_configured": true, 00:12:33.676 "data_offset": 0, 00:12:33.676 "data_size": 65536 00:12:33.676 }, 00:12:33.676 { 00:12:33.676 "name": "BaseBdev2", 00:12:33.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.676 "is_configured": false, 00:12:33.676 "data_offset": 0, 00:12:33.676 "data_size": 0 00:12:33.676 }, 00:12:33.676 { 00:12:33.676 "name": "BaseBdev3", 00:12:33.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.676 "is_configured": false, 00:12:33.676 "data_offset": 0, 00:12:33.676 "data_size": 0 00:12:33.676 }, 00:12:33.676 { 00:12:33.676 "name": "BaseBdev4", 00:12:33.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.676 "is_configured": false, 00:12:33.676 "data_offset": 0, 00:12:33.676 "data_size": 0 00:12:33.676 } 00:12:33.676 ] 00:12:33.676 }' 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.676 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.247 [2024-11-27 19:10:43.600451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:34.247 [2024-11-27 19:10:43.600519] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.247 [2024-11-27 19:10:43.608486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.247 [2024-11-27 19:10:43.610584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.247 [2024-11-27 19:10:43.610668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.247 [2024-11-27 19:10:43.610697] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:34.247 [2024-11-27 19:10:43.610718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.247 [2024-11-27 19:10:43.610725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:34.247 [2024-11-27 19:10:43.610734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.247 
19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.247 "name": "Existed_Raid", 00:12:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.247 "strip_size_kb": 0, 00:12:34.247 "state": "configuring", 00:12:34.247 "raid_level": "raid1", 00:12:34.247 "superblock": false, 00:12:34.247 "num_base_bdevs": 4, 00:12:34.247 "num_base_bdevs_discovered": 1, 
00:12:34.247 "num_base_bdevs_operational": 4, 00:12:34.247 "base_bdevs_list": [ 00:12:34.247 { 00:12:34.247 "name": "BaseBdev1", 00:12:34.247 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:34.247 "is_configured": true, 00:12:34.247 "data_offset": 0, 00:12:34.247 "data_size": 65536 00:12:34.247 }, 00:12:34.247 { 00:12:34.247 "name": "BaseBdev2", 00:12:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.247 "is_configured": false, 00:12:34.247 "data_offset": 0, 00:12:34.247 "data_size": 0 00:12:34.247 }, 00:12:34.247 { 00:12:34.247 "name": "BaseBdev3", 00:12:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.247 "is_configured": false, 00:12:34.247 "data_offset": 0, 00:12:34.247 "data_size": 0 00:12:34.247 }, 00:12:34.247 { 00:12:34.247 "name": "BaseBdev4", 00:12:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.247 "is_configured": false, 00:12:34.247 "data_offset": 0, 00:12:34.247 "data_size": 0 00:12:34.247 } 00:12:34.247 ] 00:12:34.247 }' 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.247 19:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.508 [2024-11-27 19:10:44.082675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.508 BaseBdev2 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.508 [ 00:12:34.508 { 00:12:34.508 "name": "BaseBdev2", 00:12:34.508 "aliases": [ 00:12:34.508 "0f05fe4a-fbfe-456f-9461-9543bbf7962b" 00:12:34.508 ], 00:12:34.508 "product_name": "Malloc disk", 00:12:34.508 "block_size": 512, 00:12:34.508 "num_blocks": 65536, 00:12:34.508 "uuid": "0f05fe4a-fbfe-456f-9461-9543bbf7962b", 00:12:34.508 "assigned_rate_limits": { 00:12:34.508 "rw_ios_per_sec": 0, 00:12:34.508 "rw_mbytes_per_sec": 0, 00:12:34.508 "r_mbytes_per_sec": 0, 00:12:34.508 "w_mbytes_per_sec": 0 00:12:34.508 }, 00:12:34.508 "claimed": true, 00:12:34.508 "claim_type": "exclusive_write", 00:12:34.508 "zoned": false, 00:12:34.508 "supported_io_types": { 00:12:34.508 "read": true, 
00:12:34.508 "write": true, 00:12:34.508 "unmap": true, 00:12:34.508 "flush": true, 00:12:34.508 "reset": true, 00:12:34.508 "nvme_admin": false, 00:12:34.508 "nvme_io": false, 00:12:34.508 "nvme_io_md": false, 00:12:34.508 "write_zeroes": true, 00:12:34.508 "zcopy": true, 00:12:34.508 "get_zone_info": false, 00:12:34.508 "zone_management": false, 00:12:34.508 "zone_append": false, 00:12:34.508 "compare": false, 00:12:34.508 "compare_and_write": false, 00:12:34.508 "abort": true, 00:12:34.508 "seek_hole": false, 00:12:34.508 "seek_data": false, 00:12:34.508 "copy": true, 00:12:34.508 "nvme_iov_md": false 00:12:34.508 }, 00:12:34.508 "memory_domains": [ 00:12:34.508 { 00:12:34.508 "dma_device_id": "system", 00:12:34.508 "dma_device_type": 1 00:12:34.508 }, 00:12:34.508 { 00:12:34.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.508 "dma_device_type": 2 00:12:34.508 } 00:12:34.508 ], 00:12:34.508 "driver_specific": {} 00:12:34.508 } 00:12:34.508 ] 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.508 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.769 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.769 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.769 "name": "Existed_Raid", 00:12:34.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.769 "strip_size_kb": 0, 00:12:34.769 "state": "configuring", 00:12:34.769 "raid_level": "raid1", 00:12:34.769 "superblock": false, 00:12:34.769 "num_base_bdevs": 4, 00:12:34.769 "num_base_bdevs_discovered": 2, 00:12:34.769 "num_base_bdevs_operational": 4, 00:12:34.769 "base_bdevs_list": [ 00:12:34.769 { 00:12:34.769 "name": "BaseBdev1", 00:12:34.769 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:34.769 "is_configured": true, 00:12:34.769 "data_offset": 0, 00:12:34.769 "data_size": 65536 00:12:34.769 }, 00:12:34.769 { 00:12:34.769 "name": "BaseBdev2", 00:12:34.769 "uuid": "0f05fe4a-fbfe-456f-9461-9543bbf7962b", 00:12:34.769 "is_configured": true, 
00:12:34.769 "data_offset": 0, 00:12:34.769 "data_size": 65536 00:12:34.769 }, 00:12:34.769 { 00:12:34.769 "name": "BaseBdev3", 00:12:34.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.769 "is_configured": false, 00:12:34.769 "data_offset": 0, 00:12:34.769 "data_size": 0 00:12:34.769 }, 00:12:34.769 { 00:12:34.769 "name": "BaseBdev4", 00:12:34.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.769 "is_configured": false, 00:12:34.769 "data_offset": 0, 00:12:34.769 "data_size": 0 00:12:34.769 } 00:12:34.769 ] 00:12:34.769 }' 00:12:34.769 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.769 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.029 [2024-11-27 19:10:44.650014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.029 BaseBdev3 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.029 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.289 [ 00:12:35.289 { 00:12:35.289 "name": "BaseBdev3", 00:12:35.289 "aliases": [ 00:12:35.289 "8350ba76-e835-4290-bfa0-caf6824da2b4" 00:12:35.289 ], 00:12:35.289 "product_name": "Malloc disk", 00:12:35.289 "block_size": 512, 00:12:35.289 "num_blocks": 65536, 00:12:35.289 "uuid": "8350ba76-e835-4290-bfa0-caf6824da2b4", 00:12:35.289 "assigned_rate_limits": { 00:12:35.289 "rw_ios_per_sec": 0, 00:12:35.289 "rw_mbytes_per_sec": 0, 00:12:35.289 "r_mbytes_per_sec": 0, 00:12:35.289 "w_mbytes_per_sec": 0 00:12:35.289 }, 00:12:35.289 "claimed": true, 00:12:35.289 "claim_type": "exclusive_write", 00:12:35.289 "zoned": false, 00:12:35.289 "supported_io_types": { 00:12:35.289 "read": true, 00:12:35.289 "write": true, 00:12:35.289 "unmap": true, 00:12:35.289 "flush": true, 00:12:35.289 "reset": true, 00:12:35.289 "nvme_admin": false, 00:12:35.289 "nvme_io": false, 00:12:35.289 "nvme_io_md": false, 00:12:35.289 "write_zeroes": true, 00:12:35.289 "zcopy": true, 00:12:35.289 "get_zone_info": false, 00:12:35.289 "zone_management": false, 00:12:35.289 "zone_append": false, 00:12:35.289 "compare": false, 00:12:35.289 "compare_and_write": false, 
00:12:35.289 "abort": true, 00:12:35.289 "seek_hole": false, 00:12:35.289 "seek_data": false, 00:12:35.289 "copy": true, 00:12:35.289 "nvme_iov_md": false 00:12:35.289 }, 00:12:35.289 "memory_domains": [ 00:12:35.289 { 00:12:35.289 "dma_device_id": "system", 00:12:35.289 "dma_device_type": 1 00:12:35.289 }, 00:12:35.289 { 00:12:35.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.289 "dma_device_type": 2 00:12:35.289 } 00:12:35.289 ], 00:12:35.289 "driver_specific": {} 00:12:35.289 } 00:12:35.289 ] 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.289 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.289 "name": "Existed_Raid", 00:12:35.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.289 "strip_size_kb": 0, 00:12:35.289 "state": "configuring", 00:12:35.289 "raid_level": "raid1", 00:12:35.289 "superblock": false, 00:12:35.289 "num_base_bdevs": 4, 00:12:35.289 "num_base_bdevs_discovered": 3, 00:12:35.290 "num_base_bdevs_operational": 4, 00:12:35.290 "base_bdevs_list": [ 00:12:35.290 { 00:12:35.290 "name": "BaseBdev1", 00:12:35.290 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:35.290 "is_configured": true, 00:12:35.290 "data_offset": 0, 00:12:35.290 "data_size": 65536 00:12:35.290 }, 00:12:35.290 { 00:12:35.290 "name": "BaseBdev2", 00:12:35.290 "uuid": "0f05fe4a-fbfe-456f-9461-9543bbf7962b", 00:12:35.290 "is_configured": true, 00:12:35.290 "data_offset": 0, 00:12:35.290 "data_size": 65536 00:12:35.290 }, 00:12:35.290 { 00:12:35.290 "name": "BaseBdev3", 00:12:35.290 "uuid": "8350ba76-e835-4290-bfa0-caf6824da2b4", 00:12:35.290 "is_configured": true, 00:12:35.290 "data_offset": 0, 00:12:35.290 "data_size": 65536 00:12:35.290 }, 00:12:35.290 { 00:12:35.290 "name": "BaseBdev4", 00:12:35.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.290 "is_configured": false, 
00:12:35.290 "data_offset": 0, 00:12:35.290 "data_size": 0 00:12:35.290 } 00:12:35.290 ] 00:12:35.290 }' 00:12:35.290 19:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.290 19:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.550 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.550 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.550 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.810 [2024-11-27 19:10:45.198296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.810 [2024-11-27 19:10:45.198442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:35.810 [2024-11-27 19:10:45.198468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:35.810 [2024-11-27 19:10:45.198823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:35.810 [2024-11-27 19:10:45.199065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:35.810 [2024-11-27 19:10:45.199112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:35.810 [2024-11-27 19:10:45.199452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.810 BaseBdev4 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.810 [ 00:12:35.810 { 00:12:35.810 "name": "BaseBdev4", 00:12:35.810 "aliases": [ 00:12:35.810 "3335390d-5bd7-48ed-9403-447155485852" 00:12:35.810 ], 00:12:35.810 "product_name": "Malloc disk", 00:12:35.810 "block_size": 512, 00:12:35.810 "num_blocks": 65536, 00:12:35.810 "uuid": "3335390d-5bd7-48ed-9403-447155485852", 00:12:35.810 "assigned_rate_limits": { 00:12:35.810 "rw_ios_per_sec": 0, 00:12:35.810 "rw_mbytes_per_sec": 0, 00:12:35.810 "r_mbytes_per_sec": 0, 00:12:35.810 "w_mbytes_per_sec": 0 00:12:35.810 }, 00:12:35.810 "claimed": true, 00:12:35.810 "claim_type": "exclusive_write", 00:12:35.810 "zoned": false, 00:12:35.810 "supported_io_types": { 00:12:35.810 "read": true, 00:12:35.810 "write": true, 00:12:35.810 "unmap": true, 00:12:35.810 "flush": true, 00:12:35.810 "reset": true, 00:12:35.810 
"nvme_admin": false, 00:12:35.810 "nvme_io": false, 00:12:35.810 "nvme_io_md": false, 00:12:35.810 "write_zeroes": true, 00:12:35.810 "zcopy": true, 00:12:35.810 "get_zone_info": false, 00:12:35.810 "zone_management": false, 00:12:35.810 "zone_append": false, 00:12:35.810 "compare": false, 00:12:35.810 "compare_and_write": false, 00:12:35.810 "abort": true, 00:12:35.810 "seek_hole": false, 00:12:35.810 "seek_data": false, 00:12:35.810 "copy": true, 00:12:35.810 "nvme_iov_md": false 00:12:35.810 }, 00:12:35.810 "memory_domains": [ 00:12:35.810 { 00:12:35.810 "dma_device_id": "system", 00:12:35.810 "dma_device_type": 1 00:12:35.810 }, 00:12:35.810 { 00:12:35.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.810 "dma_device_type": 2 00:12:35.810 } 00:12:35.810 ], 00:12:35.810 "driver_specific": {} 00:12:35.810 } 00:12:35.810 ] 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.810 19:10:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.810 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.810 "name": "Existed_Raid", 00:12:35.810 "uuid": "d7ba0202-8de3-4fbd-9a36-b5875813e2d9", 00:12:35.810 "strip_size_kb": 0, 00:12:35.810 "state": "online", 00:12:35.810 "raid_level": "raid1", 00:12:35.810 "superblock": false, 00:12:35.810 "num_base_bdevs": 4, 00:12:35.810 "num_base_bdevs_discovered": 4, 00:12:35.810 "num_base_bdevs_operational": 4, 00:12:35.810 "base_bdevs_list": [ 00:12:35.810 { 00:12:35.810 "name": "BaseBdev1", 00:12:35.810 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:35.810 "is_configured": true, 00:12:35.810 "data_offset": 0, 00:12:35.810 "data_size": 65536 00:12:35.810 }, 00:12:35.810 { 00:12:35.810 "name": "BaseBdev2", 00:12:35.810 "uuid": "0f05fe4a-fbfe-456f-9461-9543bbf7962b", 00:12:35.810 "is_configured": true, 00:12:35.810 "data_offset": 0, 00:12:35.810 "data_size": 65536 00:12:35.810 }, 00:12:35.810 { 00:12:35.810 "name": "BaseBdev3", 00:12:35.810 "uuid": 
"8350ba76-e835-4290-bfa0-caf6824da2b4", 00:12:35.810 "is_configured": true, 00:12:35.810 "data_offset": 0, 00:12:35.810 "data_size": 65536 00:12:35.810 }, 00:12:35.810 { 00:12:35.810 "name": "BaseBdev4", 00:12:35.810 "uuid": "3335390d-5bd7-48ed-9403-447155485852", 00:12:35.811 "is_configured": true, 00:12:35.811 "data_offset": 0, 00:12:35.811 "data_size": 65536 00:12:35.811 } 00:12:35.811 ] 00:12:35.811 }' 00:12:35.811 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.811 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.070 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.071 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.071 [2024-11-27 19:10:45.670020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.071 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.331 19:10:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.331 "name": "Existed_Raid", 00:12:36.331 "aliases": [ 00:12:36.331 "d7ba0202-8de3-4fbd-9a36-b5875813e2d9" 00:12:36.331 ], 00:12:36.331 "product_name": "Raid Volume", 00:12:36.331 "block_size": 512, 00:12:36.331 "num_blocks": 65536, 00:12:36.331 "uuid": "d7ba0202-8de3-4fbd-9a36-b5875813e2d9", 00:12:36.331 "assigned_rate_limits": { 00:12:36.331 "rw_ios_per_sec": 0, 00:12:36.331 "rw_mbytes_per_sec": 0, 00:12:36.331 "r_mbytes_per_sec": 0, 00:12:36.331 "w_mbytes_per_sec": 0 00:12:36.331 }, 00:12:36.331 "claimed": false, 00:12:36.331 "zoned": false, 00:12:36.331 "supported_io_types": { 00:12:36.331 "read": true, 00:12:36.331 "write": true, 00:12:36.331 "unmap": false, 00:12:36.331 "flush": false, 00:12:36.331 "reset": true, 00:12:36.331 "nvme_admin": false, 00:12:36.331 "nvme_io": false, 00:12:36.331 "nvme_io_md": false, 00:12:36.331 "write_zeroes": true, 00:12:36.331 "zcopy": false, 00:12:36.331 "get_zone_info": false, 00:12:36.331 "zone_management": false, 00:12:36.331 "zone_append": false, 00:12:36.331 "compare": false, 00:12:36.331 "compare_and_write": false, 00:12:36.331 "abort": false, 00:12:36.331 "seek_hole": false, 00:12:36.331 "seek_data": false, 00:12:36.331 "copy": false, 00:12:36.331 "nvme_iov_md": false 00:12:36.331 }, 00:12:36.331 "memory_domains": [ 00:12:36.331 { 00:12:36.331 "dma_device_id": "system", 00:12:36.331 "dma_device_type": 1 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.331 "dma_device_type": 2 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "system", 00:12:36.331 "dma_device_type": 1 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.331 "dma_device_type": 2 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "system", 00:12:36.331 "dma_device_type": 1 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:36.331 "dma_device_type": 2 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "system", 00:12:36.331 "dma_device_type": 1 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.331 "dma_device_type": 2 00:12:36.331 } 00:12:36.331 ], 00:12:36.331 "driver_specific": { 00:12:36.331 "raid": { 00:12:36.331 "uuid": "d7ba0202-8de3-4fbd-9a36-b5875813e2d9", 00:12:36.331 "strip_size_kb": 0, 00:12:36.331 "state": "online", 00:12:36.331 "raid_level": "raid1", 00:12:36.331 "superblock": false, 00:12:36.331 "num_base_bdevs": 4, 00:12:36.331 "num_base_bdevs_discovered": 4, 00:12:36.331 "num_base_bdevs_operational": 4, 00:12:36.331 "base_bdevs_list": [ 00:12:36.331 { 00:12:36.331 "name": "BaseBdev1", 00:12:36.331 "uuid": "7e343c1b-8371-4c7a-b972-b55acdaf336a", 00:12:36.331 "is_configured": true, 00:12:36.331 "data_offset": 0, 00:12:36.331 "data_size": 65536 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "name": "BaseBdev2", 00:12:36.331 "uuid": "0f05fe4a-fbfe-456f-9461-9543bbf7962b", 00:12:36.331 "is_configured": true, 00:12:36.331 "data_offset": 0, 00:12:36.331 "data_size": 65536 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "name": "BaseBdev3", 00:12:36.331 "uuid": "8350ba76-e835-4290-bfa0-caf6824da2b4", 00:12:36.331 "is_configured": true, 00:12:36.331 "data_offset": 0, 00:12:36.331 "data_size": 65536 00:12:36.331 }, 00:12:36.331 { 00:12:36.331 "name": "BaseBdev4", 00:12:36.331 "uuid": "3335390d-5bd7-48ed-9403-447155485852", 00:12:36.331 "is_configured": true, 00:12:36.331 "data_offset": 0, 00:12:36.331 "data_size": 65536 00:12:36.331 } 00:12:36.331 ] 00:12:36.331 } 00:12:36.331 } 00:12:36.331 }' 00:12:36.331 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.331 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:36.331 BaseBdev2 00:12:36.331 BaseBdev3 
00:12:36.331 BaseBdev4' 00:12:36.331 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.331 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.331 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.332 19:10:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.332 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.592 19:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.592 19:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.592 19:10:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.592 [2024-11-27 19:10:46.009067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.592 
19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.592 "name": "Existed_Raid", 00:12:36.592 "uuid": "d7ba0202-8de3-4fbd-9a36-b5875813e2d9", 00:12:36.592 "strip_size_kb": 0, 00:12:36.592 "state": "online", 00:12:36.592 "raid_level": "raid1", 00:12:36.592 "superblock": false, 00:12:36.592 "num_base_bdevs": 4, 00:12:36.592 "num_base_bdevs_discovered": 3, 00:12:36.592 "num_base_bdevs_operational": 3, 00:12:36.592 "base_bdevs_list": [ 00:12:36.592 { 00:12:36.592 "name": null, 00:12:36.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.592 "is_configured": false, 00:12:36.592 "data_offset": 0, 00:12:36.592 "data_size": 65536 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "name": "BaseBdev2", 00:12:36.592 "uuid": "0f05fe4a-fbfe-456f-9461-9543bbf7962b", 00:12:36.592 "is_configured": true, 00:12:36.592 "data_offset": 0, 00:12:36.592 "data_size": 65536 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "name": "BaseBdev3", 00:12:36.592 "uuid": "8350ba76-e835-4290-bfa0-caf6824da2b4", 00:12:36.592 "is_configured": true, 00:12:36.592 "data_offset": 0, 
00:12:36.592 "data_size": 65536 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "name": "BaseBdev4", 00:12:36.592 "uuid": "3335390d-5bd7-48ed-9403-447155485852", 00:12:36.592 "is_configured": true, 00:12:36.592 "data_offset": 0, 00:12:36.592 "data_size": 65536 00:12:36.592 } 00:12:36.592 ] 00:12:36.592 }' 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.592 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.162 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.162 [2024-11-27 19:10:46.641588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.163 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.423 [2024-11-27 19:10:46.807484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.423 19:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.423 [2024-11-27 19:10:46.967362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:37.423 [2024-11-27 19:10:46.967475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.683 [2024-11-27 19:10:47.070725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.683 [2024-11-27 19:10:47.070787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.683 [2024-11-27 19:10:47.070799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 BaseBdev2 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 [ 00:12:37.683 { 00:12:37.683 "name": "BaseBdev2", 00:12:37.683 "aliases": [ 00:12:37.683 "8463a393-4da6-4895-98c8-c750033ab52b" 00:12:37.683 ], 00:12:37.683 "product_name": "Malloc disk", 00:12:37.683 "block_size": 512, 00:12:37.683 "num_blocks": 65536, 00:12:37.683 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:37.683 "assigned_rate_limits": { 00:12:37.683 "rw_ios_per_sec": 0, 00:12:37.683 "rw_mbytes_per_sec": 0, 00:12:37.683 "r_mbytes_per_sec": 0, 00:12:37.683 "w_mbytes_per_sec": 0 00:12:37.683 }, 00:12:37.683 "claimed": false, 00:12:37.683 "zoned": false, 00:12:37.683 "supported_io_types": { 00:12:37.683 "read": true, 00:12:37.683 "write": true, 00:12:37.683 "unmap": true, 00:12:37.683 "flush": true, 00:12:37.683 "reset": true, 00:12:37.683 "nvme_admin": false, 00:12:37.683 "nvme_io": false, 00:12:37.683 "nvme_io_md": false, 00:12:37.683 "write_zeroes": true, 00:12:37.683 "zcopy": true, 00:12:37.683 "get_zone_info": false, 00:12:37.683 "zone_management": false, 00:12:37.683 "zone_append": false, 
00:12:37.683 "compare": false, 00:12:37.683 "compare_and_write": false, 00:12:37.683 "abort": true, 00:12:37.683 "seek_hole": false, 00:12:37.683 "seek_data": false, 00:12:37.683 "copy": true, 00:12:37.683 "nvme_iov_md": false 00:12:37.683 }, 00:12:37.683 "memory_domains": [ 00:12:37.683 { 00:12:37.683 "dma_device_id": "system", 00:12:37.683 "dma_device_type": 1 00:12:37.683 }, 00:12:37.683 { 00:12:37.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.683 "dma_device_type": 2 00:12:37.683 } 00:12:37.683 ], 00:12:37.683 "driver_specific": {} 00:12:37.683 } 00:12:37.683 ] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 BaseBdev3 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 [ 00:12:37.683 { 00:12:37.683 "name": "BaseBdev3", 00:12:37.683 "aliases": [ 00:12:37.683 "8cd66a99-034e-4160-a4a7-d1b33bf60aba" 00:12:37.683 ], 00:12:37.683 "product_name": "Malloc disk", 00:12:37.683 "block_size": 512, 00:12:37.683 "num_blocks": 65536, 00:12:37.683 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:37.683 "assigned_rate_limits": { 00:12:37.683 "rw_ios_per_sec": 0, 00:12:37.683 "rw_mbytes_per_sec": 0, 00:12:37.683 "r_mbytes_per_sec": 0, 00:12:37.683 "w_mbytes_per_sec": 0 00:12:37.683 }, 00:12:37.683 "claimed": false, 00:12:37.683 "zoned": false, 00:12:37.683 "supported_io_types": { 00:12:37.683 "read": true, 00:12:37.683 "write": true, 00:12:37.683 "unmap": true, 00:12:37.683 "flush": true, 00:12:37.683 "reset": true, 00:12:37.683 "nvme_admin": false, 00:12:37.683 "nvme_io": false, 00:12:37.683 "nvme_io_md": false, 00:12:37.683 "write_zeroes": true, 00:12:37.683 "zcopy": true, 00:12:37.683 "get_zone_info": false, 00:12:37.683 "zone_management": false, 00:12:37.683 "zone_append": false, 
00:12:37.683 "compare": false, 00:12:37.683 "compare_and_write": false, 00:12:37.683 "abort": true, 00:12:37.683 "seek_hole": false, 00:12:37.683 "seek_data": false, 00:12:37.684 "copy": true, 00:12:37.684 "nvme_iov_md": false 00:12:37.684 }, 00:12:37.684 "memory_domains": [ 00:12:37.684 { 00:12:37.684 "dma_device_id": "system", 00:12:37.684 "dma_device_type": 1 00:12:37.684 }, 00:12:37.684 { 00:12:37.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.684 "dma_device_type": 2 00:12:37.684 } 00:12:37.684 ], 00:12:37.684 "driver_specific": {} 00:12:37.684 } 00:12:37.684 ] 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.684 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.945 BaseBdev4 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.945 [ 00:12:37.945 { 00:12:37.945 "name": "BaseBdev4", 00:12:37.945 "aliases": [ 00:12:37.945 "553026f0-dd36-4f6f-b30b-b75bc80eefe5" 00:12:37.945 ], 00:12:37.945 "product_name": "Malloc disk", 00:12:37.945 "block_size": 512, 00:12:37.945 "num_blocks": 65536, 00:12:37.945 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:37.945 "assigned_rate_limits": { 00:12:37.945 "rw_ios_per_sec": 0, 00:12:37.945 "rw_mbytes_per_sec": 0, 00:12:37.945 "r_mbytes_per_sec": 0, 00:12:37.945 "w_mbytes_per_sec": 0 00:12:37.945 }, 00:12:37.945 "claimed": false, 00:12:37.945 "zoned": false, 00:12:37.945 "supported_io_types": { 00:12:37.945 "read": true, 00:12:37.945 "write": true, 00:12:37.945 "unmap": true, 00:12:37.945 "flush": true, 00:12:37.945 "reset": true, 00:12:37.945 "nvme_admin": false, 00:12:37.945 "nvme_io": false, 00:12:37.945 "nvme_io_md": false, 00:12:37.945 "write_zeroes": true, 00:12:37.945 "zcopy": true, 00:12:37.945 "get_zone_info": false, 00:12:37.945 "zone_management": false, 00:12:37.945 "zone_append": false, 
00:12:37.945 "compare": false, 00:12:37.945 "compare_and_write": false, 00:12:37.945 "abort": true, 00:12:37.945 "seek_hole": false, 00:12:37.945 "seek_data": false, 00:12:37.945 "copy": true, 00:12:37.945 "nvme_iov_md": false 00:12:37.945 }, 00:12:37.945 "memory_domains": [ 00:12:37.945 { 00:12:37.945 "dma_device_id": "system", 00:12:37.945 "dma_device_type": 1 00:12:37.945 }, 00:12:37.945 { 00:12:37.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.945 "dma_device_type": 2 00:12:37.945 } 00:12:37.945 ], 00:12:37.945 "driver_specific": {} 00:12:37.945 } 00:12:37.945 ] 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.945 [2024-11-27 19:10:47.387876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.945 [2024-11-27 19:10:47.387975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.945 [2024-11-27 19:10:47.388022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.945 [2024-11-27 19:10:47.390235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.945 [2024-11-27 19:10:47.390328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.945 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.946 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.946 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.946 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:37.946 "name": "Existed_Raid", 00:12:37.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.946 "strip_size_kb": 0, 00:12:37.946 "state": "configuring", 00:12:37.946 "raid_level": "raid1", 00:12:37.946 "superblock": false, 00:12:37.946 "num_base_bdevs": 4, 00:12:37.946 "num_base_bdevs_discovered": 3, 00:12:37.946 "num_base_bdevs_operational": 4, 00:12:37.946 "base_bdevs_list": [ 00:12:37.946 { 00:12:37.946 "name": "BaseBdev1", 00:12:37.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.946 "is_configured": false, 00:12:37.946 "data_offset": 0, 00:12:37.946 "data_size": 0 00:12:37.946 }, 00:12:37.946 { 00:12:37.946 "name": "BaseBdev2", 00:12:37.946 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:37.946 "is_configured": true, 00:12:37.946 "data_offset": 0, 00:12:37.946 "data_size": 65536 00:12:37.946 }, 00:12:37.946 { 00:12:37.946 "name": "BaseBdev3", 00:12:37.946 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:37.946 "is_configured": true, 00:12:37.946 "data_offset": 0, 00:12:37.946 "data_size": 65536 00:12:37.946 }, 00:12:37.946 { 00:12:37.946 "name": "BaseBdev4", 00:12:37.946 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:37.946 "is_configured": true, 00:12:37.946 "data_offset": 0, 00:12:37.946 "data_size": 65536 00:12:37.946 } 00:12:37.946 ] 00:12:37.946 }' 00:12:37.946 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.946 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.516 [2024-11-27 19:10:47.899104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.516 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.516 "name": "Existed_Raid", 00:12:38.516 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:38.516 "strip_size_kb": 0, 00:12:38.516 "state": "configuring", 00:12:38.516 "raid_level": "raid1", 00:12:38.517 "superblock": false, 00:12:38.517 "num_base_bdevs": 4, 00:12:38.517 "num_base_bdevs_discovered": 2, 00:12:38.517 "num_base_bdevs_operational": 4, 00:12:38.517 "base_bdevs_list": [ 00:12:38.517 { 00:12:38.517 "name": "BaseBdev1", 00:12:38.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.517 "is_configured": false, 00:12:38.517 "data_offset": 0, 00:12:38.517 "data_size": 0 00:12:38.517 }, 00:12:38.517 { 00:12:38.517 "name": null, 00:12:38.517 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:38.517 "is_configured": false, 00:12:38.517 "data_offset": 0, 00:12:38.517 "data_size": 65536 00:12:38.517 }, 00:12:38.517 { 00:12:38.517 "name": "BaseBdev3", 00:12:38.517 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:38.517 "is_configured": true, 00:12:38.517 "data_offset": 0, 00:12:38.517 "data_size": 65536 00:12:38.517 }, 00:12:38.517 { 00:12:38.517 "name": "BaseBdev4", 00:12:38.517 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:38.517 "is_configured": true, 00:12:38.517 "data_offset": 0, 00:12:38.517 "data_size": 65536 00:12:38.517 } 00:12:38.517 ] 00:12:38.517 }' 00:12:38.517 19:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.517 19:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:38.776 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.777 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.036 [2024-11-27 19:10:48.452567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.036 BaseBdev1 00:12:39.036 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.036 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:39.036 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.037 [ 00:12:39.037 { 00:12:39.037 "name": "BaseBdev1", 00:12:39.037 "aliases": [ 00:12:39.037 "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3" 00:12:39.037 ], 00:12:39.037 "product_name": "Malloc disk", 00:12:39.037 "block_size": 512, 00:12:39.037 "num_blocks": 65536, 00:12:39.037 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:39.037 "assigned_rate_limits": { 00:12:39.037 "rw_ios_per_sec": 0, 00:12:39.037 "rw_mbytes_per_sec": 0, 00:12:39.037 "r_mbytes_per_sec": 0, 00:12:39.037 "w_mbytes_per_sec": 0 00:12:39.037 }, 00:12:39.037 "claimed": true, 00:12:39.037 "claim_type": "exclusive_write", 00:12:39.037 "zoned": false, 00:12:39.037 "supported_io_types": { 00:12:39.037 "read": true, 00:12:39.037 "write": true, 00:12:39.037 "unmap": true, 00:12:39.037 "flush": true, 00:12:39.037 "reset": true, 00:12:39.037 "nvme_admin": false, 00:12:39.037 "nvme_io": false, 00:12:39.037 "nvme_io_md": false, 00:12:39.037 "write_zeroes": true, 00:12:39.037 "zcopy": true, 00:12:39.037 "get_zone_info": false, 00:12:39.037 "zone_management": false, 00:12:39.037 "zone_append": false, 00:12:39.037 "compare": false, 00:12:39.037 "compare_and_write": false, 00:12:39.037 "abort": true, 00:12:39.037 "seek_hole": false, 00:12:39.037 "seek_data": false, 00:12:39.037 "copy": true, 00:12:39.037 "nvme_iov_md": false 00:12:39.037 }, 00:12:39.037 "memory_domains": [ 00:12:39.037 { 00:12:39.037 "dma_device_id": "system", 00:12:39.037 "dma_device_type": 1 00:12:39.037 }, 00:12:39.037 { 00:12:39.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.037 "dma_device_type": 2 00:12:39.037 } 00:12:39.037 ], 00:12:39.037 "driver_specific": {} 00:12:39.037 } 00:12:39.037 ] 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.037 "name": "Existed_Raid", 00:12:39.037 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:39.037 "strip_size_kb": 0, 00:12:39.037 "state": "configuring", 00:12:39.037 "raid_level": "raid1", 00:12:39.037 "superblock": false, 00:12:39.037 "num_base_bdevs": 4, 00:12:39.037 "num_base_bdevs_discovered": 3, 00:12:39.037 "num_base_bdevs_operational": 4, 00:12:39.037 "base_bdevs_list": [ 00:12:39.037 { 00:12:39.037 "name": "BaseBdev1", 00:12:39.037 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:39.037 "is_configured": true, 00:12:39.037 "data_offset": 0, 00:12:39.037 "data_size": 65536 00:12:39.037 }, 00:12:39.037 { 00:12:39.037 "name": null, 00:12:39.037 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:39.037 "is_configured": false, 00:12:39.037 "data_offset": 0, 00:12:39.037 "data_size": 65536 00:12:39.037 }, 00:12:39.037 { 00:12:39.037 "name": "BaseBdev3", 00:12:39.037 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:39.037 "is_configured": true, 00:12:39.037 "data_offset": 0, 00:12:39.037 "data_size": 65536 00:12:39.037 }, 00:12:39.037 { 00:12:39.037 "name": "BaseBdev4", 00:12:39.037 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:39.037 "is_configured": true, 00:12:39.037 "data_offset": 0, 00:12:39.037 "data_size": 65536 00:12:39.037 } 00:12:39.037 ] 00:12:39.037 }' 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.037 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.608 19:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.608 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.608 19:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 [2024-11-27 19:10:49.039710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.608 "name": "Existed_Raid", 00:12:39.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.608 "strip_size_kb": 0, 00:12:39.608 "state": "configuring", 00:12:39.608 "raid_level": "raid1", 00:12:39.608 "superblock": false, 00:12:39.608 "num_base_bdevs": 4, 00:12:39.608 "num_base_bdevs_discovered": 2, 00:12:39.608 "num_base_bdevs_operational": 4, 00:12:39.608 "base_bdevs_list": [ 00:12:39.608 { 00:12:39.608 "name": "BaseBdev1", 00:12:39.608 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:39.608 "is_configured": true, 00:12:39.608 "data_offset": 0, 00:12:39.608 "data_size": 65536 00:12:39.608 }, 00:12:39.608 { 00:12:39.608 "name": null, 00:12:39.608 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:39.608 "is_configured": false, 00:12:39.608 "data_offset": 0, 00:12:39.608 "data_size": 65536 00:12:39.608 }, 00:12:39.608 { 00:12:39.608 "name": null, 00:12:39.608 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:39.608 "is_configured": false, 00:12:39.608 "data_offset": 0, 00:12:39.608 "data_size": 65536 00:12:39.608 }, 00:12:39.608 { 00:12:39.608 "name": "BaseBdev4", 00:12:39.608 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:39.608 "is_configured": true, 00:12:39.608 "data_offset": 0, 00:12:39.608 "data_size": 65536 00:12:39.608 } 00:12:39.608 ] 00:12:39.608 }' 00:12:39.608 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.608 19:10:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.868 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.868 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:39.868 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.868 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.868 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.127 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:40.127 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.128 [2024-11-27 19:10:49.518851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.128 19:10:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.128 "name": "Existed_Raid", 00:12:40.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.128 "strip_size_kb": 0, 00:12:40.128 "state": "configuring", 00:12:40.128 "raid_level": "raid1", 00:12:40.128 "superblock": false, 00:12:40.128 "num_base_bdevs": 4, 00:12:40.128 "num_base_bdevs_discovered": 3, 00:12:40.128 "num_base_bdevs_operational": 4, 00:12:40.128 "base_bdevs_list": [ 00:12:40.128 { 00:12:40.128 "name": "BaseBdev1", 00:12:40.128 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:40.128 "is_configured": true, 00:12:40.128 "data_offset": 0, 00:12:40.128 "data_size": 65536 00:12:40.128 }, 00:12:40.128 { 00:12:40.128 "name": null, 00:12:40.128 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:40.128 "is_configured": false, 00:12:40.128 "data_offset": 
0, 00:12:40.128 "data_size": 65536 00:12:40.128 }, 00:12:40.128 { 00:12:40.128 "name": "BaseBdev3", 00:12:40.128 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:40.128 "is_configured": true, 00:12:40.128 "data_offset": 0, 00:12:40.128 "data_size": 65536 00:12:40.128 }, 00:12:40.128 { 00:12:40.128 "name": "BaseBdev4", 00:12:40.128 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:40.128 "is_configured": true, 00:12:40.128 "data_offset": 0, 00:12:40.128 "data_size": 65536 00:12:40.128 } 00:12:40.128 ] 00:12:40.128 }' 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.128 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.388 19:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.388 [2024-11-27 19:10:49.966145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.668 19:10:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.668 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.669 "name": "Existed_Raid", 00:12:40.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.669 "strip_size_kb": 0, 00:12:40.669 "state": "configuring", 00:12:40.669 
"raid_level": "raid1", 00:12:40.669 "superblock": false, 00:12:40.669 "num_base_bdevs": 4, 00:12:40.669 "num_base_bdevs_discovered": 2, 00:12:40.669 "num_base_bdevs_operational": 4, 00:12:40.669 "base_bdevs_list": [ 00:12:40.669 { 00:12:40.669 "name": null, 00:12:40.669 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:40.669 "is_configured": false, 00:12:40.669 "data_offset": 0, 00:12:40.669 "data_size": 65536 00:12:40.669 }, 00:12:40.669 { 00:12:40.669 "name": null, 00:12:40.669 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:40.669 "is_configured": false, 00:12:40.669 "data_offset": 0, 00:12:40.669 "data_size": 65536 00:12:40.669 }, 00:12:40.669 { 00:12:40.669 "name": "BaseBdev3", 00:12:40.669 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:40.669 "is_configured": true, 00:12:40.669 "data_offset": 0, 00:12:40.669 "data_size": 65536 00:12:40.669 }, 00:12:40.669 { 00:12:40.669 "name": "BaseBdev4", 00:12:40.669 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:40.669 "is_configured": true, 00:12:40.669 "data_offset": 0, 00:12:40.669 "data_size": 65536 00:12:40.669 } 00:12:40.669 ] 00:12:40.669 }' 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.669 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 [2024-11-27 19:10:50.507430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.945 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.945 "name": "Existed_Raid", 00:12:40.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.945 "strip_size_kb": 0, 00:12:40.945 "state": "configuring", 00:12:40.945 "raid_level": "raid1", 00:12:40.945 "superblock": false, 00:12:40.945 "num_base_bdevs": 4, 00:12:40.945 "num_base_bdevs_discovered": 3, 00:12:40.945 "num_base_bdevs_operational": 4, 00:12:40.945 "base_bdevs_list": [ 00:12:40.945 { 00:12:40.945 "name": null, 00:12:40.945 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:40.945 "is_configured": false, 00:12:40.945 "data_offset": 0, 00:12:40.945 "data_size": 65536 00:12:40.945 }, 00:12:40.945 { 00:12:40.945 "name": "BaseBdev2", 00:12:40.945 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:40.945 "is_configured": true, 00:12:40.945 "data_offset": 0, 00:12:40.945 "data_size": 65536 00:12:40.945 }, 00:12:40.945 { 00:12:40.945 "name": "BaseBdev3", 00:12:40.945 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:40.945 "is_configured": true, 00:12:40.945 "data_offset": 0, 00:12:40.945 "data_size": 65536 00:12:40.945 }, 00:12:40.945 { 00:12:40.945 "name": "BaseBdev4", 00:12:40.945 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:40.945 "is_configured": true, 00:12:40.945 "data_offset": 0, 00:12:40.946 "data_size": 65536 00:12:40.946 } 00:12:40.946 ] 00:12:40.946 }' 00:12:40.946 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.946 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.515 19:10:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f1b8474e-5e6a-484f-b1e8-cdb59fa064e3 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.516 19:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 [2024-11-27 19:10:51.040542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:41.516 [2024-11-27 19:10:51.040611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:41.516 [2024-11-27 19:10:51.040621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:41.516 
[2024-11-27 19:10:51.040958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:41.516 [2024-11-27 19:10:51.041150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:41.516 [2024-11-27 19:10:51.041168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:41.516 [2024-11-27 19:10:51.041449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.516 NewBaseBdev 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 [ 00:12:41.516 { 00:12:41.516 "name": "NewBaseBdev", 00:12:41.516 "aliases": [ 00:12:41.516 "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3" 00:12:41.516 ], 00:12:41.516 "product_name": "Malloc disk", 00:12:41.516 "block_size": 512, 00:12:41.516 "num_blocks": 65536, 00:12:41.516 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:41.516 "assigned_rate_limits": { 00:12:41.516 "rw_ios_per_sec": 0, 00:12:41.516 "rw_mbytes_per_sec": 0, 00:12:41.516 "r_mbytes_per_sec": 0, 00:12:41.516 "w_mbytes_per_sec": 0 00:12:41.516 }, 00:12:41.516 "claimed": true, 00:12:41.516 "claim_type": "exclusive_write", 00:12:41.516 "zoned": false, 00:12:41.516 "supported_io_types": { 00:12:41.516 "read": true, 00:12:41.516 "write": true, 00:12:41.516 "unmap": true, 00:12:41.516 "flush": true, 00:12:41.516 "reset": true, 00:12:41.516 "nvme_admin": false, 00:12:41.516 "nvme_io": false, 00:12:41.516 "nvme_io_md": false, 00:12:41.516 "write_zeroes": true, 00:12:41.516 "zcopy": true, 00:12:41.516 "get_zone_info": false, 00:12:41.516 "zone_management": false, 00:12:41.516 "zone_append": false, 00:12:41.516 "compare": false, 00:12:41.516 "compare_and_write": false, 00:12:41.516 "abort": true, 00:12:41.516 "seek_hole": false, 00:12:41.516 "seek_data": false, 00:12:41.516 "copy": true, 00:12:41.516 "nvme_iov_md": false 00:12:41.516 }, 00:12:41.516 "memory_domains": [ 00:12:41.516 { 00:12:41.516 "dma_device_id": "system", 00:12:41.516 "dma_device_type": 1 00:12:41.516 }, 00:12:41.516 { 00:12:41.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.516 "dma_device_type": 2 00:12:41.516 } 00:12:41.516 ], 00:12:41.516 "driver_specific": {} 00:12:41.516 } 00:12:41.516 ] 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.516 "name": "Existed_Raid", 00:12:41.516 "uuid": "7557211a-309d-49b1-a50c-8a1f4adb0b35", 00:12:41.516 "strip_size_kb": 0, 00:12:41.516 "state": "online", 00:12:41.516 
"raid_level": "raid1", 00:12:41.516 "superblock": false, 00:12:41.516 "num_base_bdevs": 4, 00:12:41.516 "num_base_bdevs_discovered": 4, 00:12:41.516 "num_base_bdevs_operational": 4, 00:12:41.516 "base_bdevs_list": [ 00:12:41.516 { 00:12:41.516 "name": "NewBaseBdev", 00:12:41.516 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:41.516 "is_configured": true, 00:12:41.516 "data_offset": 0, 00:12:41.516 "data_size": 65536 00:12:41.516 }, 00:12:41.516 { 00:12:41.516 "name": "BaseBdev2", 00:12:41.516 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:41.516 "is_configured": true, 00:12:41.516 "data_offset": 0, 00:12:41.516 "data_size": 65536 00:12:41.516 }, 00:12:41.516 { 00:12:41.516 "name": "BaseBdev3", 00:12:41.516 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:41.516 "is_configured": true, 00:12:41.516 "data_offset": 0, 00:12:41.516 "data_size": 65536 00:12:41.516 }, 00:12:41.516 { 00:12:41.516 "name": "BaseBdev4", 00:12:41.516 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:41.516 "is_configured": true, 00:12:41.516 "data_offset": 0, 00:12:41.516 "data_size": 65536 00:12:41.516 } 00:12:41.516 ] 00:12:41.516 }' 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.516 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.087 [2024-11-27 19:10:51.480246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.087 "name": "Existed_Raid", 00:12:42.087 "aliases": [ 00:12:42.087 "7557211a-309d-49b1-a50c-8a1f4adb0b35" 00:12:42.087 ], 00:12:42.087 "product_name": "Raid Volume", 00:12:42.087 "block_size": 512, 00:12:42.087 "num_blocks": 65536, 00:12:42.087 "uuid": "7557211a-309d-49b1-a50c-8a1f4adb0b35", 00:12:42.087 "assigned_rate_limits": { 00:12:42.087 "rw_ios_per_sec": 0, 00:12:42.087 "rw_mbytes_per_sec": 0, 00:12:42.087 "r_mbytes_per_sec": 0, 00:12:42.087 "w_mbytes_per_sec": 0 00:12:42.087 }, 00:12:42.087 "claimed": false, 00:12:42.087 "zoned": false, 00:12:42.087 "supported_io_types": { 00:12:42.087 "read": true, 00:12:42.087 "write": true, 00:12:42.087 "unmap": false, 00:12:42.087 "flush": false, 00:12:42.087 "reset": true, 00:12:42.087 "nvme_admin": false, 00:12:42.087 "nvme_io": false, 00:12:42.087 "nvme_io_md": false, 00:12:42.087 "write_zeroes": true, 00:12:42.087 "zcopy": false, 00:12:42.087 "get_zone_info": false, 00:12:42.087 "zone_management": false, 00:12:42.087 "zone_append": false, 00:12:42.087 "compare": false, 00:12:42.087 "compare_and_write": false, 00:12:42.087 "abort": false, 00:12:42.087 "seek_hole": false, 00:12:42.087 "seek_data": false, 00:12:42.087 
"copy": false, 00:12:42.087 "nvme_iov_md": false 00:12:42.087 }, 00:12:42.087 "memory_domains": [ 00:12:42.087 { 00:12:42.087 "dma_device_id": "system", 00:12:42.087 "dma_device_type": 1 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.087 "dma_device_type": 2 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "system", 00:12:42.087 "dma_device_type": 1 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.087 "dma_device_type": 2 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "system", 00:12:42.087 "dma_device_type": 1 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.087 "dma_device_type": 2 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "system", 00:12:42.087 "dma_device_type": 1 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.087 "dma_device_type": 2 00:12:42.087 } 00:12:42.087 ], 00:12:42.087 "driver_specific": { 00:12:42.087 "raid": { 00:12:42.087 "uuid": "7557211a-309d-49b1-a50c-8a1f4adb0b35", 00:12:42.087 "strip_size_kb": 0, 00:12:42.087 "state": "online", 00:12:42.087 "raid_level": "raid1", 00:12:42.087 "superblock": false, 00:12:42.087 "num_base_bdevs": 4, 00:12:42.087 "num_base_bdevs_discovered": 4, 00:12:42.087 "num_base_bdevs_operational": 4, 00:12:42.087 "base_bdevs_list": [ 00:12:42.087 { 00:12:42.087 "name": "NewBaseBdev", 00:12:42.087 "uuid": "f1b8474e-5e6a-484f-b1e8-cdb59fa064e3", 00:12:42.087 "is_configured": true, 00:12:42.087 "data_offset": 0, 00:12:42.087 "data_size": 65536 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "name": "BaseBdev2", 00:12:42.087 "uuid": "8463a393-4da6-4895-98c8-c750033ab52b", 00:12:42.087 "is_configured": true, 00:12:42.087 "data_offset": 0, 00:12:42.087 "data_size": 65536 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "name": "BaseBdev3", 00:12:42.087 "uuid": "8cd66a99-034e-4160-a4a7-d1b33bf60aba", 00:12:42.087 
"is_configured": true, 00:12:42.087 "data_offset": 0, 00:12:42.087 "data_size": 65536 00:12:42.087 }, 00:12:42.087 { 00:12:42.087 "name": "BaseBdev4", 00:12:42.087 "uuid": "553026f0-dd36-4f6f-b30b-b75bc80eefe5", 00:12:42.087 "is_configured": true, 00:12:42.087 "data_offset": 0, 00:12:42.087 "data_size": 65536 00:12:42.087 } 00:12:42.087 ] 00:12:42.087 } 00:12:42.087 } 00:12:42.087 }' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:42.087 BaseBdev2 00:12:42.087 BaseBdev3 00:12:42.087 BaseBdev4' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.087 19:10:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.087 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.348 19:10:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 [2024-11-27 19:10:51.791352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.348 [2024-11-27 19:10:51.791434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.348 [2024-11-27 19:10:51.791544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.348 [2024-11-27 19:10:51.791901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.348 [2024-11-27 19:10:51.791919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73290 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73290 ']' 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73290 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73290 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73290' 00:12:42.348 killing process with pid 73290 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73290 00:12:42.348 [2024-11-27 19:10:51.829938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.348 19:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73290 00:12:42.918 [2024-11-27 19:10:52.259226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.858 19:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:43.858 00:12:43.858 real 0m11.860s 00:12:43.858 user 0m18.447s 00:12:43.858 sys 0m2.348s 00:12:43.858 19:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.858 19:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.858 ************************************ 00:12:43.858 END TEST raid_state_function_test 00:12:43.858 ************************************ 
00:12:44.118 19:10:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:44.118 19:10:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:44.118 19:10:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.118 19:10:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.118 ************************************ 00:12:44.118 START TEST raid_state_function_test_sb 00:12:44.118 ************************************ 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:44.118 
19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73961 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73961' 00:12:44.118 Process raid pid: 73961 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73961 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73961 ']' 00:12:44.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.118 19:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.118 [2024-11-27 19:10:53.652317] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:44.118 [2024-11-27 19:10:53.652494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.378 [2024-11-27 19:10:53.812686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.378 [2024-11-27 19:10:53.946154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.638 [2024-11-27 19:10:54.183883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.638 [2024-11-27 19:10:54.183935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.897 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.897 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:44.897 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.898 [2024-11-27 19:10:54.493435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.898 [2024-11-27 19:10:54.493583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.898 [2024-11-27 19:10:54.493599] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.898 [2024-11-27 19:10:54.493609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.898 [2024-11-27 19:10:54.493615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:44.898 [2024-11-27 19:10:54.493626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.898 [2024-11-27 19:10:54.493632] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.898 [2024-11-27 19:10:54.493642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.898 19:10:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.898 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.158 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.158 "name": "Existed_Raid", 00:12:45.158 "uuid": "0b0e4c34-d70c-4626-a699-522b58a3c4da", 00:12:45.158 "strip_size_kb": 0, 00:12:45.158 "state": "configuring", 00:12:45.158 "raid_level": "raid1", 00:12:45.158 "superblock": true, 00:12:45.158 "num_base_bdevs": 4, 00:12:45.158 "num_base_bdevs_discovered": 0, 00:12:45.158 "num_base_bdevs_operational": 4, 00:12:45.158 "base_bdevs_list": [ 00:12:45.158 { 00:12:45.158 "name": "BaseBdev1", 00:12:45.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.158 "is_configured": false, 00:12:45.158 "data_offset": 0, 00:12:45.158 "data_size": 0 00:12:45.158 }, 00:12:45.158 { 00:12:45.158 "name": "BaseBdev2", 00:12:45.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.158 "is_configured": false, 00:12:45.158 "data_offset": 0, 00:12:45.158 "data_size": 0 00:12:45.158 }, 00:12:45.158 { 00:12:45.158 "name": "BaseBdev3", 00:12:45.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.158 "is_configured": false, 00:12:45.158 "data_offset": 0, 00:12:45.158 "data_size": 0 00:12:45.158 }, 00:12:45.158 { 00:12:45.158 "name": "BaseBdev4", 00:12:45.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.158 "is_configured": false, 00:12:45.158 "data_offset": 0, 00:12:45.158 "data_size": 0 00:12:45.158 } 00:12:45.158 ] 00:12:45.158 }' 00:12:45.158 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.158 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 [2024-11-27 19:10:54.960555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.418 [2024-11-27 19:10:54.960703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 [2024-11-27 19:10:54.968489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:45.418 [2024-11-27 19:10:54.968581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:45.418 [2024-11-27 19:10:54.968610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.418 [2024-11-27 19:10:54.968634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.418 [2024-11-27 19:10:54.968652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.418 [2024-11-27 19:10:54.968674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.418 [2024-11-27 19:10:54.968702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:45.418 [2024-11-27 19:10:54.968741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.418 19:10:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 [2024-11-27 19:10:55.018319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.418 BaseBdev1 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.418 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.418 [ 00:12:45.418 { 00:12:45.418 "name": "BaseBdev1", 00:12:45.418 "aliases": [ 00:12:45.418 "8d316578-dbeb-4f1f-87b1-d8225e754c32" 00:12:45.418 ], 00:12:45.418 "product_name": "Malloc disk", 00:12:45.418 "block_size": 512, 00:12:45.418 "num_blocks": 65536, 00:12:45.418 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:45.418 "assigned_rate_limits": { 00:12:45.418 "rw_ios_per_sec": 0, 00:12:45.418 "rw_mbytes_per_sec": 0, 00:12:45.418 "r_mbytes_per_sec": 0, 00:12:45.418 "w_mbytes_per_sec": 0 00:12:45.418 }, 00:12:45.418 "claimed": true, 00:12:45.418 "claim_type": "exclusive_write", 00:12:45.418 "zoned": false, 00:12:45.418 "supported_io_types": { 00:12:45.418 "read": true, 00:12:45.418 "write": true, 00:12:45.418 "unmap": true, 00:12:45.418 "flush": true, 00:12:45.418 "reset": true, 00:12:45.418 "nvme_admin": false, 00:12:45.418 "nvme_io": false, 00:12:45.418 "nvme_io_md": false, 00:12:45.418 "write_zeroes": true, 00:12:45.418 "zcopy": true, 00:12:45.418 "get_zone_info": false, 00:12:45.418 "zone_management": false, 00:12:45.418 "zone_append": false, 00:12:45.418 "compare": false, 00:12:45.418 "compare_and_write": false, 00:12:45.418 "abort": true, 00:12:45.418 "seek_hole": false, 00:12:45.418 "seek_data": false, 00:12:45.418 "copy": true, 00:12:45.678 "nvme_iov_md": false 00:12:45.678 }, 00:12:45.678 "memory_domains": [ 00:12:45.678 { 00:12:45.678 "dma_device_id": "system", 00:12:45.678 "dma_device_type": 1 00:12:45.678 }, 00:12:45.678 { 00:12:45.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.678 "dma_device_type": 2 00:12:45.678 } 00:12:45.678 ], 00:12:45.678 "driver_specific": {} 
00:12:45.678 } 00:12:45.678 ] 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.678 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.678 "name": "Existed_Raid", 00:12:45.678 "uuid": "9262545a-3ba2-428b-a6f2-a071cbbeab69", 00:12:45.678 "strip_size_kb": 0, 00:12:45.678 "state": "configuring", 00:12:45.678 "raid_level": "raid1", 00:12:45.678 "superblock": true, 00:12:45.678 "num_base_bdevs": 4, 00:12:45.678 "num_base_bdevs_discovered": 1, 00:12:45.678 "num_base_bdevs_operational": 4, 00:12:45.678 "base_bdevs_list": [ 00:12:45.678 { 00:12:45.678 "name": "BaseBdev1", 00:12:45.678 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:45.678 "is_configured": true, 00:12:45.679 "data_offset": 2048, 00:12:45.679 "data_size": 63488 00:12:45.679 }, 00:12:45.679 { 00:12:45.679 "name": "BaseBdev2", 00:12:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.679 "is_configured": false, 00:12:45.679 "data_offset": 0, 00:12:45.679 "data_size": 0 00:12:45.679 }, 00:12:45.679 { 00:12:45.679 "name": "BaseBdev3", 00:12:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.679 "is_configured": false, 00:12:45.679 "data_offset": 0, 00:12:45.679 "data_size": 0 00:12:45.679 }, 00:12:45.679 { 00:12:45.679 "name": "BaseBdev4", 00:12:45.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.679 "is_configured": false, 00:12:45.679 "data_offset": 0, 00:12:45.679 "data_size": 0 00:12:45.679 } 00:12:45.679 ] 00:12:45.679 }' 00:12:45.679 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.679 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.939 [2024-11-27 19:10:55.481605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.939 [2024-11-27 19:10:55.481777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.939 [2024-11-27 19:10:55.493642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.939 [2024-11-27 19:10:55.495952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.939 [2024-11-27 19:10:55.496037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.939 [2024-11-27 19:10:55.496068] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.939 [2024-11-27 19:10:55.496093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.939 [2024-11-27 19:10:55.496112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:45.939 [2024-11-27 19:10:55.496133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:45.939 19:10:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.939 "name": 
"Existed_Raid", 00:12:45.939 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:45.939 "strip_size_kb": 0, 00:12:45.939 "state": "configuring", 00:12:45.939 "raid_level": "raid1", 00:12:45.939 "superblock": true, 00:12:45.939 "num_base_bdevs": 4, 00:12:45.939 "num_base_bdevs_discovered": 1, 00:12:45.939 "num_base_bdevs_operational": 4, 00:12:45.939 "base_bdevs_list": [ 00:12:45.939 { 00:12:45.939 "name": "BaseBdev1", 00:12:45.939 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:45.939 "is_configured": true, 00:12:45.939 "data_offset": 2048, 00:12:45.939 "data_size": 63488 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "name": "BaseBdev2", 00:12:45.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.939 "is_configured": false, 00:12:45.939 "data_offset": 0, 00:12:45.939 "data_size": 0 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "name": "BaseBdev3", 00:12:45.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.939 "is_configured": false, 00:12:45.939 "data_offset": 0, 00:12:45.939 "data_size": 0 00:12:45.939 }, 00:12:45.939 { 00:12:45.939 "name": "BaseBdev4", 00:12:45.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.939 "is_configured": false, 00:12:45.939 "data_offset": 0, 00:12:45.939 "data_size": 0 00:12:45.939 } 00:12:45.939 ] 00:12:45.939 }' 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.939 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.508 [2024-11-27 19:10:55.986355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.508 
BaseBdev2 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.508 19:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.508 [ 00:12:46.508 { 00:12:46.508 "name": "BaseBdev2", 00:12:46.508 "aliases": [ 00:12:46.508 "add8512f-a70a-49ce-9b83-f18040b8e17b" 00:12:46.508 ], 00:12:46.508 "product_name": "Malloc disk", 00:12:46.508 "block_size": 512, 00:12:46.508 "num_blocks": 65536, 00:12:46.508 "uuid": "add8512f-a70a-49ce-9b83-f18040b8e17b", 00:12:46.508 "assigned_rate_limits": { 
00:12:46.508 "rw_ios_per_sec": 0, 00:12:46.508 "rw_mbytes_per_sec": 0, 00:12:46.508 "r_mbytes_per_sec": 0, 00:12:46.508 "w_mbytes_per_sec": 0 00:12:46.508 }, 00:12:46.508 "claimed": true, 00:12:46.508 "claim_type": "exclusive_write", 00:12:46.508 "zoned": false, 00:12:46.508 "supported_io_types": { 00:12:46.508 "read": true, 00:12:46.508 "write": true, 00:12:46.508 "unmap": true, 00:12:46.508 "flush": true, 00:12:46.508 "reset": true, 00:12:46.508 "nvme_admin": false, 00:12:46.508 "nvme_io": false, 00:12:46.508 "nvme_io_md": false, 00:12:46.508 "write_zeroes": true, 00:12:46.508 "zcopy": true, 00:12:46.508 "get_zone_info": false, 00:12:46.508 "zone_management": false, 00:12:46.508 "zone_append": false, 00:12:46.508 "compare": false, 00:12:46.508 "compare_and_write": false, 00:12:46.508 "abort": true, 00:12:46.508 "seek_hole": false, 00:12:46.508 "seek_data": false, 00:12:46.508 "copy": true, 00:12:46.508 "nvme_iov_md": false 00:12:46.508 }, 00:12:46.508 "memory_domains": [ 00:12:46.508 { 00:12:46.508 "dma_device_id": "system", 00:12:46.508 "dma_device_type": 1 00:12:46.508 }, 00:12:46.508 { 00:12:46.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.508 "dma_device_type": 2 00:12:46.508 } 00:12:46.508 ], 00:12:46.508 "driver_specific": {} 00:12:46.508 } 00:12:46.508 ] 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.508 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.508 "name": "Existed_Raid", 00:12:46.508 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:46.508 "strip_size_kb": 0, 00:12:46.508 "state": "configuring", 00:12:46.508 "raid_level": "raid1", 00:12:46.508 "superblock": true, 00:12:46.508 "num_base_bdevs": 4, 00:12:46.508 "num_base_bdevs_discovered": 2, 00:12:46.508 "num_base_bdevs_operational": 4, 00:12:46.508 
"base_bdevs_list": [ 00:12:46.508 { 00:12:46.508 "name": "BaseBdev1", 00:12:46.508 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:46.508 "is_configured": true, 00:12:46.508 "data_offset": 2048, 00:12:46.508 "data_size": 63488 00:12:46.508 }, 00:12:46.508 { 00:12:46.508 "name": "BaseBdev2", 00:12:46.508 "uuid": "add8512f-a70a-49ce-9b83-f18040b8e17b", 00:12:46.508 "is_configured": true, 00:12:46.508 "data_offset": 2048, 00:12:46.508 "data_size": 63488 00:12:46.508 }, 00:12:46.508 { 00:12:46.508 "name": "BaseBdev3", 00:12:46.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.508 "is_configured": false, 00:12:46.508 "data_offset": 0, 00:12:46.509 "data_size": 0 00:12:46.509 }, 00:12:46.509 { 00:12:46.509 "name": "BaseBdev4", 00:12:46.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.509 "is_configured": false, 00:12:46.509 "data_offset": 0, 00:12:46.509 "data_size": 0 00:12:46.509 } 00:12:46.509 ] 00:12:46.509 }' 00:12:46.509 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.509 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 [2024-11-27 19:10:56.510969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.078 BaseBdev3 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.078 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 [ 00:12:47.078 { 00:12:47.078 "name": "BaseBdev3", 00:12:47.078 "aliases": [ 00:12:47.078 "d5371cf2-449d-4a69-91d3-320c6996bd33" 00:12:47.078 ], 00:12:47.078 "product_name": "Malloc disk", 00:12:47.078 "block_size": 512, 00:12:47.078 "num_blocks": 65536, 00:12:47.078 "uuid": "d5371cf2-449d-4a69-91d3-320c6996bd33", 00:12:47.078 "assigned_rate_limits": { 00:12:47.078 "rw_ios_per_sec": 0, 00:12:47.078 "rw_mbytes_per_sec": 0, 00:12:47.078 "r_mbytes_per_sec": 0, 00:12:47.078 "w_mbytes_per_sec": 0 00:12:47.078 }, 00:12:47.079 "claimed": true, 00:12:47.079 "claim_type": "exclusive_write", 00:12:47.079 "zoned": false, 00:12:47.079 "supported_io_types": { 00:12:47.079 "read": true, 00:12:47.079 
"write": true, 00:12:47.079 "unmap": true, 00:12:47.079 "flush": true, 00:12:47.079 "reset": true, 00:12:47.079 "nvme_admin": false, 00:12:47.079 "nvme_io": false, 00:12:47.079 "nvme_io_md": false, 00:12:47.079 "write_zeroes": true, 00:12:47.079 "zcopy": true, 00:12:47.079 "get_zone_info": false, 00:12:47.079 "zone_management": false, 00:12:47.079 "zone_append": false, 00:12:47.079 "compare": false, 00:12:47.079 "compare_and_write": false, 00:12:47.079 "abort": true, 00:12:47.079 "seek_hole": false, 00:12:47.079 "seek_data": false, 00:12:47.079 "copy": true, 00:12:47.079 "nvme_iov_md": false 00:12:47.079 }, 00:12:47.079 "memory_domains": [ 00:12:47.079 { 00:12:47.079 "dma_device_id": "system", 00:12:47.079 "dma_device_type": 1 00:12:47.079 }, 00:12:47.079 { 00:12:47.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.079 "dma_device_type": 2 00:12:47.079 } 00:12:47.079 ], 00:12:47.079 "driver_specific": {} 00:12:47.079 } 00:12:47.079 ] 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.079 "name": "Existed_Raid", 00:12:47.079 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:47.079 "strip_size_kb": 0, 00:12:47.079 "state": "configuring", 00:12:47.079 "raid_level": "raid1", 00:12:47.079 "superblock": true, 00:12:47.079 "num_base_bdevs": 4, 00:12:47.079 "num_base_bdevs_discovered": 3, 00:12:47.079 "num_base_bdevs_operational": 4, 00:12:47.079 "base_bdevs_list": [ 00:12:47.079 { 00:12:47.079 "name": "BaseBdev1", 00:12:47.079 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:47.079 "is_configured": true, 00:12:47.079 "data_offset": 2048, 00:12:47.079 "data_size": 63488 00:12:47.079 }, 00:12:47.079 { 00:12:47.079 "name": "BaseBdev2", 00:12:47.079 "uuid": 
"add8512f-a70a-49ce-9b83-f18040b8e17b", 00:12:47.079 "is_configured": true, 00:12:47.079 "data_offset": 2048, 00:12:47.079 "data_size": 63488 00:12:47.079 }, 00:12:47.079 { 00:12:47.079 "name": "BaseBdev3", 00:12:47.079 "uuid": "d5371cf2-449d-4a69-91d3-320c6996bd33", 00:12:47.079 "is_configured": true, 00:12:47.079 "data_offset": 2048, 00:12:47.079 "data_size": 63488 00:12:47.079 }, 00:12:47.079 { 00:12:47.079 "name": "BaseBdev4", 00:12:47.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.079 "is_configured": false, 00:12:47.079 "data_offset": 0, 00:12:47.079 "data_size": 0 00:12:47.079 } 00:12:47.079 ] 00:12:47.079 }' 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.079 19:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 [2024-11-27 19:10:57.054640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.649 [2024-11-27 19:10:57.054992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:47.649 [2024-11-27 19:10:57.055010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.649 [2024-11-27 19:10:57.055331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:47.649 [2024-11-27 19:10:57.055540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:47.649 [2024-11-27 19:10:57.055554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:47.649 [2024-11-27 19:10:57.055728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.649 BaseBdev4 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 [ 00:12:47.649 { 00:12:47.649 "name": "BaseBdev4", 00:12:47.649 "aliases": [ 00:12:47.649 "dc1badbf-7abf-4d8c-a600-a9804ba16d89" 00:12:47.649 ], 00:12:47.649 "product_name": "Malloc disk", 00:12:47.649 "block_size": 512, 00:12:47.649 
"num_blocks": 65536, 00:12:47.649 "uuid": "dc1badbf-7abf-4d8c-a600-a9804ba16d89", 00:12:47.649 "assigned_rate_limits": { 00:12:47.649 "rw_ios_per_sec": 0, 00:12:47.649 "rw_mbytes_per_sec": 0, 00:12:47.649 "r_mbytes_per_sec": 0, 00:12:47.649 "w_mbytes_per_sec": 0 00:12:47.649 }, 00:12:47.649 "claimed": true, 00:12:47.649 "claim_type": "exclusive_write", 00:12:47.649 "zoned": false, 00:12:47.649 "supported_io_types": { 00:12:47.649 "read": true, 00:12:47.649 "write": true, 00:12:47.649 "unmap": true, 00:12:47.649 "flush": true, 00:12:47.649 "reset": true, 00:12:47.649 "nvme_admin": false, 00:12:47.649 "nvme_io": false, 00:12:47.649 "nvme_io_md": false, 00:12:47.649 "write_zeroes": true, 00:12:47.649 "zcopy": true, 00:12:47.649 "get_zone_info": false, 00:12:47.649 "zone_management": false, 00:12:47.649 "zone_append": false, 00:12:47.649 "compare": false, 00:12:47.649 "compare_and_write": false, 00:12:47.649 "abort": true, 00:12:47.649 "seek_hole": false, 00:12:47.649 "seek_data": false, 00:12:47.649 "copy": true, 00:12:47.649 "nvme_iov_md": false 00:12:47.649 }, 00:12:47.649 "memory_domains": [ 00:12:47.649 { 00:12:47.649 "dma_device_id": "system", 00:12:47.649 "dma_device_type": 1 00:12:47.649 }, 00:12:47.649 { 00:12:47.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.649 "dma_device_type": 2 00:12:47.649 } 00:12:47.649 ], 00:12:47.649 "driver_specific": {} 00:12:47.649 } 00:12:47.649 ] 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.649 "name": "Existed_Raid", 00:12:47.649 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:47.649 "strip_size_kb": 0, 00:12:47.649 "state": "online", 00:12:47.649 "raid_level": "raid1", 00:12:47.649 "superblock": true, 00:12:47.649 "num_base_bdevs": 4, 
00:12:47.649 "num_base_bdevs_discovered": 4, 00:12:47.649 "num_base_bdevs_operational": 4, 00:12:47.649 "base_bdevs_list": [ 00:12:47.649 { 00:12:47.649 "name": "BaseBdev1", 00:12:47.649 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:47.649 "is_configured": true, 00:12:47.649 "data_offset": 2048, 00:12:47.649 "data_size": 63488 00:12:47.649 }, 00:12:47.649 { 00:12:47.649 "name": "BaseBdev2", 00:12:47.649 "uuid": "add8512f-a70a-49ce-9b83-f18040b8e17b", 00:12:47.649 "is_configured": true, 00:12:47.649 "data_offset": 2048, 00:12:47.649 "data_size": 63488 00:12:47.649 }, 00:12:47.649 { 00:12:47.649 "name": "BaseBdev3", 00:12:47.649 "uuid": "d5371cf2-449d-4a69-91d3-320c6996bd33", 00:12:47.649 "is_configured": true, 00:12:47.649 "data_offset": 2048, 00:12:47.649 "data_size": 63488 00:12:47.649 }, 00:12:47.649 { 00:12:47.649 "name": "BaseBdev4", 00:12:47.649 "uuid": "dc1badbf-7abf-4d8c-a600-a9804ba16d89", 00:12:47.649 "is_configured": true, 00:12:47.649 "data_offset": 2048, 00:12:47.649 "data_size": 63488 00:12:47.649 } 00:12:47.649 ] 00:12:47.649 }' 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.649 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.909 
19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.909 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.909 [2024-11-27 19:10:57.530182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.169 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.169 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.169 "name": "Existed_Raid", 00:12:48.169 "aliases": [ 00:12:48.169 "18919706-99b0-43a8-a346-c68ad337b496" 00:12:48.169 ], 00:12:48.169 "product_name": "Raid Volume", 00:12:48.169 "block_size": 512, 00:12:48.169 "num_blocks": 63488, 00:12:48.169 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:48.169 "assigned_rate_limits": { 00:12:48.169 "rw_ios_per_sec": 0, 00:12:48.169 "rw_mbytes_per_sec": 0, 00:12:48.169 "r_mbytes_per_sec": 0, 00:12:48.169 "w_mbytes_per_sec": 0 00:12:48.169 }, 00:12:48.169 "claimed": false, 00:12:48.169 "zoned": false, 00:12:48.169 "supported_io_types": { 00:12:48.169 "read": true, 00:12:48.169 "write": true, 00:12:48.169 "unmap": false, 00:12:48.169 "flush": false, 00:12:48.169 "reset": true, 00:12:48.169 "nvme_admin": false, 00:12:48.169 "nvme_io": false, 00:12:48.169 "nvme_io_md": false, 00:12:48.169 "write_zeroes": true, 00:12:48.169 "zcopy": false, 00:12:48.169 "get_zone_info": false, 00:12:48.169 "zone_management": false, 00:12:48.169 "zone_append": false, 00:12:48.169 "compare": false, 00:12:48.169 "compare_and_write": false, 00:12:48.169 "abort": false, 00:12:48.169 "seek_hole": false, 00:12:48.169 "seek_data": false, 00:12:48.169 "copy": false, 00:12:48.169 
"nvme_iov_md": false 00:12:48.169 }, 00:12:48.169 "memory_domains": [ 00:12:48.169 { 00:12:48.169 "dma_device_id": "system", 00:12:48.169 "dma_device_type": 1 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.169 "dma_device_type": 2 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "system", 00:12:48.169 "dma_device_type": 1 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.169 "dma_device_type": 2 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "system", 00:12:48.169 "dma_device_type": 1 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.169 "dma_device_type": 2 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "system", 00:12:48.169 "dma_device_type": 1 00:12:48.169 }, 00:12:48.169 { 00:12:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.170 "dma_device_type": 2 00:12:48.170 } 00:12:48.170 ], 00:12:48.170 "driver_specific": { 00:12:48.170 "raid": { 00:12:48.170 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:48.170 "strip_size_kb": 0, 00:12:48.170 "state": "online", 00:12:48.170 "raid_level": "raid1", 00:12:48.170 "superblock": true, 00:12:48.170 "num_base_bdevs": 4, 00:12:48.170 "num_base_bdevs_discovered": 4, 00:12:48.170 "num_base_bdevs_operational": 4, 00:12:48.170 "base_bdevs_list": [ 00:12:48.170 { 00:12:48.170 "name": "BaseBdev1", 00:12:48.170 "uuid": "8d316578-dbeb-4f1f-87b1-d8225e754c32", 00:12:48.170 "is_configured": true, 00:12:48.170 "data_offset": 2048, 00:12:48.170 "data_size": 63488 00:12:48.170 }, 00:12:48.170 { 00:12:48.170 "name": "BaseBdev2", 00:12:48.170 "uuid": "add8512f-a70a-49ce-9b83-f18040b8e17b", 00:12:48.170 "is_configured": true, 00:12:48.170 "data_offset": 2048, 00:12:48.170 "data_size": 63488 00:12:48.170 }, 00:12:48.170 { 00:12:48.170 "name": "BaseBdev3", 00:12:48.170 "uuid": "d5371cf2-449d-4a69-91d3-320c6996bd33", 00:12:48.170 "is_configured": true, 
00:12:48.170 "data_offset": 2048, 00:12:48.170 "data_size": 63488 00:12:48.170 }, 00:12:48.170 { 00:12:48.170 "name": "BaseBdev4", 00:12:48.170 "uuid": "dc1badbf-7abf-4d8c-a600-a9804ba16d89", 00:12:48.170 "is_configured": true, 00:12:48.170 "data_offset": 2048, 00:12:48.170 "data_size": 63488 00:12:48.170 } 00:12:48.170 ] 00:12:48.170 } 00:12:48.170 } 00:12:48.170 }' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:48.170 BaseBdev2 00:12:48.170 BaseBdev3 00:12:48.170 BaseBdev4' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.170 19:10:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.170 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.429 [2024-11-27 19:10:57.861413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:48.429 19:10:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.429 19:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.430 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.430 "name": "Existed_Raid", 00:12:48.430 "uuid": "18919706-99b0-43a8-a346-c68ad337b496", 00:12:48.430 "strip_size_kb": 0, 00:12:48.430 
"state": "online", 00:12:48.430 "raid_level": "raid1", 00:12:48.430 "superblock": true, 00:12:48.430 "num_base_bdevs": 4, 00:12:48.430 "num_base_bdevs_discovered": 3, 00:12:48.430 "num_base_bdevs_operational": 3, 00:12:48.430 "base_bdevs_list": [ 00:12:48.430 { 00:12:48.430 "name": null, 00:12:48.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.430 "is_configured": false, 00:12:48.430 "data_offset": 0, 00:12:48.430 "data_size": 63488 00:12:48.430 }, 00:12:48.430 { 00:12:48.430 "name": "BaseBdev2", 00:12:48.430 "uuid": "add8512f-a70a-49ce-9b83-f18040b8e17b", 00:12:48.430 "is_configured": true, 00:12:48.430 "data_offset": 2048, 00:12:48.430 "data_size": 63488 00:12:48.430 }, 00:12:48.430 { 00:12:48.430 "name": "BaseBdev3", 00:12:48.430 "uuid": "d5371cf2-449d-4a69-91d3-320c6996bd33", 00:12:48.430 "is_configured": true, 00:12:48.430 "data_offset": 2048, 00:12:48.430 "data_size": 63488 00:12:48.430 }, 00:12:48.430 { 00:12:48.430 "name": "BaseBdev4", 00:12:48.430 "uuid": "dc1badbf-7abf-4d8c-a600-a9804ba16d89", 00:12:48.430 "is_configured": true, 00:12:48.430 "data_offset": 2048, 00:12:48.430 "data_size": 63488 00:12:48.430 } 00:12:48.430 ] 00:12:48.430 }' 00:12:48.430 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.430 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.999 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:48.999 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.000 19:10:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.000 [2024-11-27 19:10:58.488431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.000 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 [2024-11-27 19:10:58.645697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 [2024-11-27 19:10:58.808216] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:49.260 [2024-11-27 19:10:58.808340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.520 [2024-11-27 19:10:58.915398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.520 [2024-11-27 19:10:58.915572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.520 [2024-11-27 19:10:58.915622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 19:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 BaseBdev2 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:49.520 [ 00:12:49.520 { 00:12:49.520 "name": "BaseBdev2", 00:12:49.520 "aliases": [ 00:12:49.520 "84cef970-bcf0-4fb3-bcc6-f206ee256309" 00:12:49.520 ], 00:12:49.520 "product_name": "Malloc disk", 00:12:49.520 "block_size": 512, 00:12:49.520 "num_blocks": 65536, 00:12:49.520 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:49.520 "assigned_rate_limits": { 00:12:49.520 "rw_ios_per_sec": 0, 00:12:49.520 "rw_mbytes_per_sec": 0, 00:12:49.520 "r_mbytes_per_sec": 0, 00:12:49.520 "w_mbytes_per_sec": 0 00:12:49.520 }, 00:12:49.520 "claimed": false, 00:12:49.520 "zoned": false, 00:12:49.520 "supported_io_types": { 00:12:49.520 "read": true, 00:12:49.520 "write": true, 00:12:49.520 "unmap": true, 00:12:49.520 "flush": true, 00:12:49.520 "reset": true, 00:12:49.520 "nvme_admin": false, 00:12:49.520 "nvme_io": false, 00:12:49.520 "nvme_io_md": false, 00:12:49.520 "write_zeroes": true, 00:12:49.520 "zcopy": true, 00:12:49.520 "get_zone_info": false, 00:12:49.520 "zone_management": false, 00:12:49.520 "zone_append": false, 00:12:49.520 "compare": false, 00:12:49.520 "compare_and_write": false, 00:12:49.520 "abort": true, 00:12:49.520 "seek_hole": false, 00:12:49.520 "seek_data": false, 00:12:49.520 "copy": true, 00:12:49.520 "nvme_iov_md": false 00:12:49.520 }, 00:12:49.520 "memory_domains": [ 00:12:49.520 { 00:12:49.520 "dma_device_id": "system", 00:12:49.520 "dma_device_type": 1 00:12:49.520 }, 00:12:49.520 { 00:12:49.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.520 "dma_device_type": 2 00:12:49.520 } 00:12:49.520 ], 00:12:49.520 "driver_specific": {} 00:12:49.520 } 00:12:49.520 ] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.520 19:10:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 BaseBdev3 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.520 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.521 19:10:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.521 [ 00:12:49.521 { 00:12:49.521 "name": "BaseBdev3", 00:12:49.521 "aliases": [ 00:12:49.521 "7b339006-44ff-405f-acd0-d9f42ec82775" 00:12:49.521 ], 00:12:49.521 "product_name": "Malloc disk", 00:12:49.521 "block_size": 512, 00:12:49.521 "num_blocks": 65536, 00:12:49.521 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:49.521 "assigned_rate_limits": { 00:12:49.521 "rw_ios_per_sec": 0, 00:12:49.521 "rw_mbytes_per_sec": 0, 00:12:49.521 "r_mbytes_per_sec": 0, 00:12:49.521 "w_mbytes_per_sec": 0 00:12:49.521 }, 00:12:49.521 "claimed": false, 00:12:49.521 "zoned": false, 00:12:49.521 "supported_io_types": { 00:12:49.521 "read": true, 00:12:49.521 "write": true, 00:12:49.521 "unmap": true, 00:12:49.521 "flush": true, 00:12:49.521 "reset": true, 00:12:49.521 "nvme_admin": false, 00:12:49.521 "nvme_io": false, 00:12:49.521 "nvme_io_md": false, 00:12:49.521 "write_zeroes": true, 00:12:49.521 "zcopy": true, 00:12:49.521 "get_zone_info": false, 00:12:49.521 "zone_management": false, 00:12:49.521 "zone_append": false, 00:12:49.521 "compare": false, 00:12:49.521 "compare_and_write": false, 00:12:49.521 "abort": true, 00:12:49.521 "seek_hole": false, 00:12:49.521 "seek_data": false, 00:12:49.521 "copy": true, 00:12:49.521 "nvme_iov_md": false 00:12:49.521 }, 00:12:49.521 "memory_domains": [ 00:12:49.521 { 00:12:49.521 "dma_device_id": "system", 00:12:49.521 "dma_device_type": 1 00:12:49.521 }, 00:12:49.521 { 00:12:49.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.521 "dma_device_type": 2 00:12:49.521 } 00:12:49.521 ], 00:12:49.521 "driver_specific": {} 00:12:49.521 } 00:12:49.521 ] 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.521 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 BaseBdev4 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 [ 00:12:49.779 { 00:12:49.779 "name": "BaseBdev4", 00:12:49.779 "aliases": [ 00:12:49.779 "f2b5d771-af41-44b7-b0db-83eab26d875d" 00:12:49.779 ], 00:12:49.779 "product_name": "Malloc disk", 00:12:49.779 "block_size": 512, 00:12:49.779 "num_blocks": 65536, 00:12:49.779 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:49.779 "assigned_rate_limits": { 00:12:49.779 "rw_ios_per_sec": 0, 00:12:49.779 "rw_mbytes_per_sec": 0, 00:12:49.779 "r_mbytes_per_sec": 0, 00:12:49.779 "w_mbytes_per_sec": 0 00:12:49.779 }, 00:12:49.779 "claimed": false, 00:12:49.779 "zoned": false, 00:12:49.779 "supported_io_types": { 00:12:49.779 "read": true, 00:12:49.779 "write": true, 00:12:49.779 "unmap": true, 00:12:49.779 "flush": true, 00:12:49.779 "reset": true, 00:12:49.779 "nvme_admin": false, 00:12:49.779 "nvme_io": false, 00:12:49.779 "nvme_io_md": false, 00:12:49.779 "write_zeroes": true, 00:12:49.779 "zcopy": true, 00:12:49.779 "get_zone_info": false, 00:12:49.779 "zone_management": false, 00:12:49.779 "zone_append": false, 00:12:49.779 "compare": false, 00:12:49.779 "compare_and_write": false, 00:12:49.779 "abort": true, 00:12:49.779 "seek_hole": false, 00:12:49.779 "seek_data": false, 00:12:49.779 "copy": true, 00:12:49.779 "nvme_iov_md": false 00:12:49.779 }, 00:12:49.779 "memory_domains": [ 00:12:49.779 { 00:12:49.779 "dma_device_id": "system", 00:12:49.779 "dma_device_type": 1 00:12:49.779 }, 00:12:49.779 { 00:12:49.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.779 "dma_device_type": 2 00:12:49.779 } 00:12:49.779 ], 00:12:49.779 "driver_specific": {} 00:12:49.779 } 00:12:49.779 ] 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 [2024-11-27 19:10:59.232308] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.779 [2024-11-27 19:10:59.232415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.779 [2024-11-27 19:10:59.232462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.779 [2024-11-27 19:10:59.234593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.780 [2024-11-27 19:10:59.234688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.780 "name": "Existed_Raid", 00:12:49.780 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:49.780 "strip_size_kb": 0, 00:12:49.780 "state": "configuring", 00:12:49.780 "raid_level": "raid1", 00:12:49.780 "superblock": true, 00:12:49.780 "num_base_bdevs": 4, 00:12:49.780 "num_base_bdevs_discovered": 3, 00:12:49.780 "num_base_bdevs_operational": 4, 00:12:49.780 "base_bdevs_list": [ 00:12:49.780 { 00:12:49.780 "name": "BaseBdev1", 00:12:49.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.780 "is_configured": false, 00:12:49.780 "data_offset": 0, 00:12:49.780 "data_size": 0 00:12:49.780 }, 00:12:49.780 { 00:12:49.780 "name": "BaseBdev2", 00:12:49.780 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 
00:12:49.780 "is_configured": true, 00:12:49.780 "data_offset": 2048, 00:12:49.780 "data_size": 63488 00:12:49.780 }, 00:12:49.780 { 00:12:49.780 "name": "BaseBdev3", 00:12:49.780 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:49.780 "is_configured": true, 00:12:49.780 "data_offset": 2048, 00:12:49.780 "data_size": 63488 00:12:49.780 }, 00:12:49.780 { 00:12:49.780 "name": "BaseBdev4", 00:12:49.780 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:49.780 "is_configured": true, 00:12:49.780 "data_offset": 2048, 00:12:49.780 "data_size": 63488 00:12:49.780 } 00:12:49.780 ] 00:12:49.780 }' 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.780 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.351 [2024-11-27 19:10:59.715518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.351 "name": "Existed_Raid", 00:12:50.351 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:50.351 "strip_size_kb": 0, 00:12:50.351 "state": "configuring", 00:12:50.351 "raid_level": "raid1", 00:12:50.351 "superblock": true, 00:12:50.351 "num_base_bdevs": 4, 00:12:50.351 "num_base_bdevs_discovered": 2, 00:12:50.351 "num_base_bdevs_operational": 4, 00:12:50.351 "base_bdevs_list": [ 00:12:50.351 { 00:12:50.351 "name": "BaseBdev1", 00:12:50.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.351 "is_configured": false, 00:12:50.351 "data_offset": 0, 00:12:50.351 "data_size": 0 00:12:50.351 }, 00:12:50.351 { 00:12:50.351 "name": null, 00:12:50.351 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:50.351 
"is_configured": false, 00:12:50.351 "data_offset": 0, 00:12:50.351 "data_size": 63488 00:12:50.351 }, 00:12:50.351 { 00:12:50.351 "name": "BaseBdev3", 00:12:50.351 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:50.351 "is_configured": true, 00:12:50.351 "data_offset": 2048, 00:12:50.351 "data_size": 63488 00:12:50.351 }, 00:12:50.351 { 00:12:50.351 "name": "BaseBdev4", 00:12:50.351 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:50.351 "is_configured": true, 00:12:50.351 "data_offset": 2048, 00:12:50.351 "data_size": 63488 00:12:50.351 } 00:12:50.351 ] 00:12:50.351 }' 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.351 19:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.635 [2024-11-27 19:11:00.238623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.635 BaseBdev1 
00:12:50.635 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.636 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.636 [ 00:12:50.636 { 00:12:50.636 "name": "BaseBdev1", 00:12:50.636 "aliases": [ 00:12:50.636 "b40c3323-dd3a-4049-8e5a-cf69d995ca26" 00:12:50.636 ], 00:12:50.636 "product_name": "Malloc disk", 00:12:50.636 "block_size": 512, 00:12:50.636 "num_blocks": 65536, 00:12:50.636 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:50.636 "assigned_rate_limits": { 00:12:50.636 
"rw_ios_per_sec": 0, 00:12:50.636 "rw_mbytes_per_sec": 0, 00:12:50.636 "r_mbytes_per_sec": 0, 00:12:50.636 "w_mbytes_per_sec": 0 00:12:50.636 }, 00:12:50.636 "claimed": true, 00:12:50.636 "claim_type": "exclusive_write", 00:12:50.636 "zoned": false, 00:12:50.636 "supported_io_types": { 00:12:50.636 "read": true, 00:12:50.896 "write": true, 00:12:50.896 "unmap": true, 00:12:50.896 "flush": true, 00:12:50.896 "reset": true, 00:12:50.896 "nvme_admin": false, 00:12:50.896 "nvme_io": false, 00:12:50.896 "nvme_io_md": false, 00:12:50.896 "write_zeroes": true, 00:12:50.896 "zcopy": true, 00:12:50.896 "get_zone_info": false, 00:12:50.896 "zone_management": false, 00:12:50.896 "zone_append": false, 00:12:50.896 "compare": false, 00:12:50.896 "compare_and_write": false, 00:12:50.896 "abort": true, 00:12:50.896 "seek_hole": false, 00:12:50.896 "seek_data": false, 00:12:50.896 "copy": true, 00:12:50.896 "nvme_iov_md": false 00:12:50.896 }, 00:12:50.896 "memory_domains": [ 00:12:50.896 { 00:12:50.896 "dma_device_id": "system", 00:12:50.896 "dma_device_type": 1 00:12:50.896 }, 00:12:50.896 { 00:12:50.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.896 "dma_device_type": 2 00:12:50.896 } 00:12:50.896 ], 00:12:50.896 "driver_specific": {} 00:12:50.896 } 00:12:50.896 ] 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.896 "name": "Existed_Raid", 00:12:50.896 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:50.896 "strip_size_kb": 0, 00:12:50.896 "state": "configuring", 00:12:50.896 "raid_level": "raid1", 00:12:50.896 "superblock": true, 00:12:50.896 "num_base_bdevs": 4, 00:12:50.896 "num_base_bdevs_discovered": 3, 00:12:50.896 "num_base_bdevs_operational": 4, 00:12:50.896 "base_bdevs_list": [ 00:12:50.896 { 00:12:50.896 "name": "BaseBdev1", 00:12:50.896 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:50.896 "is_configured": true, 00:12:50.896 "data_offset": 2048, 00:12:50.896 "data_size": 63488 
00:12:50.896 }, 00:12:50.896 { 00:12:50.896 "name": null, 00:12:50.896 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:50.896 "is_configured": false, 00:12:50.896 "data_offset": 0, 00:12:50.896 "data_size": 63488 00:12:50.896 }, 00:12:50.896 { 00:12:50.896 "name": "BaseBdev3", 00:12:50.896 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:50.896 "is_configured": true, 00:12:50.896 "data_offset": 2048, 00:12:50.896 "data_size": 63488 00:12:50.896 }, 00:12:50.896 { 00:12:50.896 "name": "BaseBdev4", 00:12:50.896 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:50.896 "is_configured": true, 00:12:50.896 "data_offset": 2048, 00:12:50.896 "data_size": 63488 00:12:50.896 } 00:12:50.896 ] 00:12:50.896 }' 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.896 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.156 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.156 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.156 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:51.156 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.156 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.417 
[2024-11-27 19:11:00.809772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.417 19:11:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.417 "name": "Existed_Raid", 00:12:51.417 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:51.417 "strip_size_kb": 0, 00:12:51.417 "state": "configuring", 00:12:51.417 "raid_level": "raid1", 00:12:51.417 "superblock": true, 00:12:51.417 "num_base_bdevs": 4, 00:12:51.417 "num_base_bdevs_discovered": 2, 00:12:51.417 "num_base_bdevs_operational": 4, 00:12:51.417 "base_bdevs_list": [ 00:12:51.417 { 00:12:51.417 "name": "BaseBdev1", 00:12:51.417 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:51.417 "is_configured": true, 00:12:51.417 "data_offset": 2048, 00:12:51.417 "data_size": 63488 00:12:51.417 }, 00:12:51.417 { 00:12:51.417 "name": null, 00:12:51.417 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:51.417 "is_configured": false, 00:12:51.417 "data_offset": 0, 00:12:51.417 "data_size": 63488 00:12:51.417 }, 00:12:51.417 { 00:12:51.417 "name": null, 00:12:51.417 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:51.417 "is_configured": false, 00:12:51.417 "data_offset": 0, 00:12:51.417 "data_size": 63488 00:12:51.417 }, 00:12:51.417 { 00:12:51.417 "name": "BaseBdev4", 00:12:51.417 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:51.417 "is_configured": true, 00:12:51.417 "data_offset": 2048, 00:12:51.417 "data_size": 63488 00:12:51.417 } 00:12:51.417 ] 00:12:51.417 }' 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.417 19:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.677 
19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.677 [2024-11-27 19:11:01.233024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.677 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.678 "name": "Existed_Raid", 00:12:51.678 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:51.678 "strip_size_kb": 0, 00:12:51.678 "state": "configuring", 00:12:51.678 "raid_level": "raid1", 00:12:51.678 "superblock": true, 00:12:51.678 "num_base_bdevs": 4, 00:12:51.678 "num_base_bdevs_discovered": 3, 00:12:51.678 "num_base_bdevs_operational": 4, 00:12:51.678 "base_bdevs_list": [ 00:12:51.678 { 00:12:51.678 "name": "BaseBdev1", 00:12:51.678 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:51.678 "is_configured": true, 00:12:51.678 "data_offset": 2048, 00:12:51.678 "data_size": 63488 00:12:51.678 }, 00:12:51.678 { 00:12:51.678 "name": null, 00:12:51.678 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:51.678 "is_configured": false, 00:12:51.678 "data_offset": 0, 00:12:51.678 "data_size": 63488 00:12:51.678 }, 00:12:51.678 { 00:12:51.678 "name": "BaseBdev3", 00:12:51.678 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:51.678 "is_configured": true, 00:12:51.678 "data_offset": 2048, 00:12:51.678 "data_size": 63488 00:12:51.678 }, 00:12:51.678 { 00:12:51.678 "name": "BaseBdev4", 00:12:51.678 "uuid": 
"f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:51.678 "is_configured": true, 00:12:51.678 "data_offset": 2048, 00:12:51.678 "data_size": 63488 00:12:51.678 } 00:12:51.678 ] 00:12:51.678 }' 00:12:51.678 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.678 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.247 [2024-11-27 19:11:01.720269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.247 "name": "Existed_Raid", 00:12:52.247 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:52.247 "strip_size_kb": 0, 00:12:52.247 "state": "configuring", 00:12:52.247 "raid_level": "raid1", 00:12:52.247 "superblock": true, 00:12:52.247 "num_base_bdevs": 4, 00:12:52.247 "num_base_bdevs_discovered": 2, 00:12:52.247 "num_base_bdevs_operational": 4, 00:12:52.247 "base_bdevs_list": [ 00:12:52.247 { 00:12:52.247 "name": null, 00:12:52.247 
"uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:52.247 "is_configured": false, 00:12:52.247 "data_offset": 0, 00:12:52.247 "data_size": 63488 00:12:52.247 }, 00:12:52.247 { 00:12:52.247 "name": null, 00:12:52.247 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:52.247 "is_configured": false, 00:12:52.247 "data_offset": 0, 00:12:52.247 "data_size": 63488 00:12:52.247 }, 00:12:52.247 { 00:12:52.247 "name": "BaseBdev3", 00:12:52.247 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:52.247 "is_configured": true, 00:12:52.247 "data_offset": 2048, 00:12:52.247 "data_size": 63488 00:12:52.247 }, 00:12:52.247 { 00:12:52.247 "name": "BaseBdev4", 00:12:52.247 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:52.247 "is_configured": true, 00:12:52.247 "data_offset": 2048, 00:12:52.247 "data_size": 63488 00:12:52.247 } 00:12:52.247 ] 00:12:52.247 }' 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.247 19:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.817 [2024-11-27 19:11:02.316133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.817 "name": "Existed_Raid", 00:12:52.817 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:52.817 "strip_size_kb": 0, 00:12:52.817 "state": "configuring", 00:12:52.817 "raid_level": "raid1", 00:12:52.817 "superblock": true, 00:12:52.817 "num_base_bdevs": 4, 00:12:52.817 "num_base_bdevs_discovered": 3, 00:12:52.817 "num_base_bdevs_operational": 4, 00:12:52.817 "base_bdevs_list": [ 00:12:52.817 { 00:12:52.817 "name": null, 00:12:52.817 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:52.817 "is_configured": false, 00:12:52.817 "data_offset": 0, 00:12:52.817 "data_size": 63488 00:12:52.817 }, 00:12:52.817 { 00:12:52.817 "name": "BaseBdev2", 00:12:52.817 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:52.817 "is_configured": true, 00:12:52.817 "data_offset": 2048, 00:12:52.817 "data_size": 63488 00:12:52.817 }, 00:12:52.817 { 00:12:52.817 "name": "BaseBdev3", 00:12:52.817 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:52.817 "is_configured": true, 00:12:52.817 "data_offset": 2048, 00:12:52.817 "data_size": 63488 00:12:52.817 }, 00:12:52.817 { 00:12:52.817 "name": "BaseBdev4", 00:12:52.817 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:52.817 "is_configured": true, 00:12:52.817 "data_offset": 2048, 00:12:52.817 "data_size": 63488 00:12:52.817 } 00:12:52.817 ] 00:12:52.817 }' 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.817 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.077 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.077 19:11:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.077 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.077 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b40c3323-dd3a-4049-8e5a-cf69d995ca26 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.337 [2024-11-27 19:11:02.820593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:53.337 [2024-11-27 19:11:02.820949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:53.337 [2024-11-27 19:11:02.821015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.337 [2024-11-27 19:11:02.821327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:53.337 [2024-11-27 19:11:02.821548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:53.337 [2024-11-27 19:11:02.821592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:53.337 NewBaseBdev 00:12:53.337 [2024-11-27 19:11:02.821790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:53.337 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.337 19:11:02
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.337 [ 00:12:53.337 { 00:12:53.337 "name": "NewBaseBdev", 00:12:53.337 "aliases": [ 00:12:53.337 "b40c3323-dd3a-4049-8e5a-cf69d995ca26" 00:12:53.337 ], 00:12:53.338 "product_name": "Malloc disk", 00:12:53.338 "block_size": 512, 00:12:53.338 "num_blocks": 65536, 00:12:53.338 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:53.338 "assigned_rate_limits": { 00:12:53.338 "rw_ios_per_sec": 0, 00:12:53.338 "rw_mbytes_per_sec": 0, 00:12:53.338 "r_mbytes_per_sec": 0, 00:12:53.338 "w_mbytes_per_sec": 0 00:12:53.338 }, 00:12:53.338 "claimed": true, 00:12:53.338 "claim_type": "exclusive_write", 00:12:53.338 "zoned": false, 00:12:53.338 "supported_io_types": { 00:12:53.338 "read": true, 00:12:53.338 "write": true, 00:12:53.338 "unmap": true, 00:12:53.338 "flush": true, 00:12:53.338 "reset": true, 00:12:53.338 "nvme_admin": false, 00:12:53.338 "nvme_io": false, 00:12:53.338 "nvme_io_md": false, 00:12:53.338 "write_zeroes": true, 00:12:53.338 "zcopy": true, 00:12:53.338 "get_zone_info": false, 00:12:53.338 "zone_management": false, 00:12:53.338 "zone_append": false, 00:12:53.338 "compare": false, 00:12:53.338 "compare_and_write": false, 00:12:53.338 "abort": true, 00:12:53.338 "seek_hole": false, 00:12:53.338 "seek_data": false, 00:12:53.338 "copy": true, 00:12:53.338 "nvme_iov_md": false 00:12:53.338 }, 00:12:53.338 "memory_domains": [ 00:12:53.338 { 00:12:53.338 "dma_device_id": "system", 00:12:53.338 "dma_device_type": 1 00:12:53.338 }, 00:12:53.338 { 00:12:53.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.338 "dma_device_type": 2 00:12:53.338 } 00:12:53.338 ], 00:12:53.338 "driver_specific": {} 00:12:53.338 } 00:12:53.338 ] 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.338 19:11:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.338 "name": "Existed_Raid", 00:12:53.338 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:53.338 "strip_size_kb": 0, 00:12:53.338 
"state": "online", 00:12:53.338 "raid_level": "raid1", 00:12:53.338 "superblock": true, 00:12:53.338 "num_base_bdevs": 4, 00:12:53.338 "num_base_bdevs_discovered": 4, 00:12:53.338 "num_base_bdevs_operational": 4, 00:12:53.338 "base_bdevs_list": [ 00:12:53.338 { 00:12:53.338 "name": "NewBaseBdev", 00:12:53.338 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:53.338 "is_configured": true, 00:12:53.338 "data_offset": 2048, 00:12:53.338 "data_size": 63488 00:12:53.338 }, 00:12:53.338 { 00:12:53.338 "name": "BaseBdev2", 00:12:53.338 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:53.338 "is_configured": true, 00:12:53.338 "data_offset": 2048, 00:12:53.338 "data_size": 63488 00:12:53.338 }, 00:12:53.338 { 00:12:53.338 "name": "BaseBdev3", 00:12:53.338 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:53.338 "is_configured": true, 00:12:53.338 "data_offset": 2048, 00:12:53.338 "data_size": 63488 00:12:53.338 }, 00:12:53.338 { 00:12:53.338 "name": "BaseBdev4", 00:12:53.338 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:53.338 "is_configured": true, 00:12:53.338 "data_offset": 2048, 00:12:53.338 "data_size": 63488 00:12:53.338 } 00:12:53.338 ] 00:12:53.338 }' 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.338 19:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.908 
19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.908 [2024-11-27 19:11:03.360165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.908 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.908 "name": "Existed_Raid", 00:12:53.908 "aliases": [ 00:12:53.908 "55182ecd-f8ea-44e4-ae10-902f4c7ec980" 00:12:53.908 ], 00:12:53.908 "product_name": "Raid Volume", 00:12:53.908 "block_size": 512, 00:12:53.909 "num_blocks": 63488, 00:12:53.909 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:53.909 "assigned_rate_limits": { 00:12:53.909 "rw_ios_per_sec": 0, 00:12:53.909 "rw_mbytes_per_sec": 0, 00:12:53.909 "r_mbytes_per_sec": 0, 00:12:53.909 "w_mbytes_per_sec": 0 00:12:53.909 }, 00:12:53.909 "claimed": false, 00:12:53.909 "zoned": false, 00:12:53.909 "supported_io_types": { 00:12:53.909 "read": true, 00:12:53.909 "write": true, 00:12:53.909 "unmap": false, 00:12:53.909 "flush": false, 00:12:53.909 "reset": true, 00:12:53.909 "nvme_admin": false, 00:12:53.909 "nvme_io": false, 00:12:53.909 "nvme_io_md": false, 00:12:53.909 "write_zeroes": true, 00:12:53.909 "zcopy": false, 00:12:53.909 "get_zone_info": false, 00:12:53.909 "zone_management": false, 00:12:53.909 "zone_append": false, 00:12:53.909 "compare": false, 00:12:53.909 "compare_and_write": false, 00:12:53.909 
"abort": false, 00:12:53.909 "seek_hole": false, 00:12:53.909 "seek_data": false, 00:12:53.909 "copy": false, 00:12:53.909 "nvme_iov_md": false 00:12:53.909 }, 00:12:53.909 "memory_domains": [ 00:12:53.909 { 00:12:53.909 "dma_device_id": "system", 00:12:53.909 "dma_device_type": 1 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.909 "dma_device_type": 2 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "system", 00:12:53.909 "dma_device_type": 1 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.909 "dma_device_type": 2 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "system", 00:12:53.909 "dma_device_type": 1 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.909 "dma_device_type": 2 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "system", 00:12:53.909 "dma_device_type": 1 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.909 "dma_device_type": 2 00:12:53.909 } 00:12:53.909 ], 00:12:53.909 "driver_specific": { 00:12:53.909 "raid": { 00:12:53.909 "uuid": "55182ecd-f8ea-44e4-ae10-902f4c7ec980", 00:12:53.909 "strip_size_kb": 0, 00:12:53.909 "state": "online", 00:12:53.909 "raid_level": "raid1", 00:12:53.909 "superblock": true, 00:12:53.909 "num_base_bdevs": 4, 00:12:53.909 "num_base_bdevs_discovered": 4, 00:12:53.909 "num_base_bdevs_operational": 4, 00:12:53.909 "base_bdevs_list": [ 00:12:53.909 { 00:12:53.909 "name": "NewBaseBdev", 00:12:53.909 "uuid": "b40c3323-dd3a-4049-8e5a-cf69d995ca26", 00:12:53.909 "is_configured": true, 00:12:53.909 "data_offset": 2048, 00:12:53.909 "data_size": 63488 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "name": "BaseBdev2", 00:12:53.909 "uuid": "84cef970-bcf0-4fb3-bcc6-f206ee256309", 00:12:53.909 "is_configured": true, 00:12:53.909 "data_offset": 2048, 00:12:53.909 "data_size": 63488 00:12:53.909 }, 00:12:53.909 { 
00:12:53.909 "name": "BaseBdev3", 00:12:53.909 "uuid": "7b339006-44ff-405f-acd0-d9f42ec82775", 00:12:53.909 "is_configured": true, 00:12:53.909 "data_offset": 2048, 00:12:53.909 "data_size": 63488 00:12:53.909 }, 00:12:53.909 { 00:12:53.909 "name": "BaseBdev4", 00:12:53.909 "uuid": "f2b5d771-af41-44b7-b0db-83eab26d875d", 00:12:53.909 "is_configured": true, 00:12:53.909 "data_offset": 2048, 00:12:53.909 "data_size": 63488 00:12:53.909 } 00:12:53.909 ] 00:12:53.909 } 00:12:53.909 } 00:12:53.909 }' 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:53.909 BaseBdev2 00:12:53.909 BaseBdev3 00:12:53.909 BaseBdev4' 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.909 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.169 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.170 [2024-11-27 19:11:03.695122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.170 [2024-11-27 19:11:03.695154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.170 [2024-11-27 19:11:03.695236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.170 [2024-11-27 19:11:03.695574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.170 [2024-11-27 19:11:03.695589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73961 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73961 ']' 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73961 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73961 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.170 killing process with pid 73961 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73961' 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73961 00:12:54.170 [2024-11-27 19:11:03.743239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.170 19:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73961 00:12:54.739 [2024-11-27 19:11:04.172927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.120 19:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:56.120 00:12:56.120 real 0m11.861s 00:12:56.120 user 0m18.490s 00:12:56.120 sys 0m2.308s 00:12:56.120 19:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:56.120 ************************************ 00:12:56.120 END TEST raid_state_function_test_sb 00:12:56.120 ************************************ 00:12:56.120 19:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.120 19:11:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:56.120 19:11:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:56.120 19:11:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.120 19:11:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.120 ************************************ 00:12:56.120 START TEST raid_superblock_test 00:12:56.120 ************************************ 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:56.120 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:56.121 19:11:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74632 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74632 00:12:56.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74632 ']' 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.121 19:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.121 [2024-11-27 19:11:05.587736] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:56.121 [2024-11-27 19:11:05.587879] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74632 ] 00:12:56.381 [2024-11-27 19:11:05.768499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.381 [2024-11-27 19:11:05.909740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.641 [2024-11-27 19:11:06.142220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.641 [2024-11-27 19:11:06.142261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:56.902 
19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.902 malloc1 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.902 [2024-11-27 19:11:06.487639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.902 [2024-11-27 19:11:06.487771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.902 [2024-11-27 19:11:06.487815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.902 [2024-11-27 19:11:06.487888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.902 [2024-11-27 19:11:06.490376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.902 [2024-11-27 19:11:06.490452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.902 pt1 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.902 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 malloc2 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 [2024-11-27 19:11:06.553431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.163 [2024-11-27 19:11:06.553500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.163 [2024-11-27 19:11:06.553530] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.163 [2024-11-27 19:11:06.553540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.163 [2024-11-27 19:11:06.556029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.163 [2024-11-27 19:11:06.556067] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.163 
pt2 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 malloc3 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 [2024-11-27 19:11:06.629290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.163 [2024-11-27 19:11:06.629418] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.163 [2024-11-27 19:11:06.629463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:57.163 [2024-11-27 19:11:06.629496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.163 [2024-11-27 19:11:06.632033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.163 [2024-11-27 19:11:06.632107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.163 pt3 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 malloc4 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.163 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.163 [2024-11-27 19:11:06.697558] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:57.163 [2024-11-27 19:11:06.697686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.163 [2024-11-27 19:11:06.697741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:57.163 [2024-11-27 19:11:06.697775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.163 [2024-11-27 19:11:06.700265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.163 [2024-11-27 19:11:06.700338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:57.163 pt4 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 [2024-11-27 19:11:06.709568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.164 [2024-11-27 19:11:06.711757] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.164 [2024-11-27 19:11:06.711871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.164 [2024-11-27 19:11:06.711960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:57.164 [2024-11-27 19:11:06.712226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:57.164 [2024-11-27 19:11:06.712280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.164 [2024-11-27 19:11:06.712587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:57.164 [2024-11-27 19:11:06.712907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:57.164 [2024-11-27 19:11:06.712965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:57.164 [2024-11-27 19:11:06.713186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.164 
19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.164 "name": "raid_bdev1", 00:12:57.164 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:12:57.164 "strip_size_kb": 0, 00:12:57.164 "state": "online", 00:12:57.164 "raid_level": "raid1", 00:12:57.164 "superblock": true, 00:12:57.164 "num_base_bdevs": 4, 00:12:57.164 "num_base_bdevs_discovered": 4, 00:12:57.164 "num_base_bdevs_operational": 4, 00:12:57.164 "base_bdevs_list": [ 00:12:57.164 { 00:12:57.164 "name": "pt1", 00:12:57.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 2048, 00:12:57.164 "data_size": 63488 00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "name": "pt2", 00:12:57.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 2048, 00:12:57.164 "data_size": 63488 00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "name": "pt3", 00:12:57.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 2048, 00:12:57.164 "data_size": 63488 
00:12:57.164 }, 00:12:57.164 { 00:12:57.164 "name": "pt4", 00:12:57.164 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.164 "is_configured": true, 00:12:57.164 "data_offset": 2048, 00:12:57.164 "data_size": 63488 00:12:57.164 } 00:12:57.164 ] 00:12:57.164 }' 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.164 19:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.734 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.734 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:57.734 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.734 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.735 [2024-11-27 19:11:07.173130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.735 "name": "raid_bdev1", 00:12:57.735 "aliases": [ 00:12:57.735 "c434608c-070f-4051-8a2b-b23a0f33d15a" 00:12:57.735 ], 
00:12:57.735 "product_name": "Raid Volume", 00:12:57.735 "block_size": 512, 00:12:57.735 "num_blocks": 63488, 00:12:57.735 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:12:57.735 "assigned_rate_limits": { 00:12:57.735 "rw_ios_per_sec": 0, 00:12:57.735 "rw_mbytes_per_sec": 0, 00:12:57.735 "r_mbytes_per_sec": 0, 00:12:57.735 "w_mbytes_per_sec": 0 00:12:57.735 }, 00:12:57.735 "claimed": false, 00:12:57.735 "zoned": false, 00:12:57.735 "supported_io_types": { 00:12:57.735 "read": true, 00:12:57.735 "write": true, 00:12:57.735 "unmap": false, 00:12:57.735 "flush": false, 00:12:57.735 "reset": true, 00:12:57.735 "nvme_admin": false, 00:12:57.735 "nvme_io": false, 00:12:57.735 "nvme_io_md": false, 00:12:57.735 "write_zeroes": true, 00:12:57.735 "zcopy": false, 00:12:57.735 "get_zone_info": false, 00:12:57.735 "zone_management": false, 00:12:57.735 "zone_append": false, 00:12:57.735 "compare": false, 00:12:57.735 "compare_and_write": false, 00:12:57.735 "abort": false, 00:12:57.735 "seek_hole": false, 00:12:57.735 "seek_data": false, 00:12:57.735 "copy": false, 00:12:57.735 "nvme_iov_md": false 00:12:57.735 }, 00:12:57.735 "memory_domains": [ 00:12:57.735 { 00:12:57.735 "dma_device_id": "system", 00:12:57.735 "dma_device_type": 1 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.735 "dma_device_type": 2 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": "system", 00:12:57.735 "dma_device_type": 1 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.735 "dma_device_type": 2 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": "system", 00:12:57.735 "dma_device_type": 1 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.735 "dma_device_type": 2 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": "system", 00:12:57.735 "dma_device_type": 1 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:57.735 "dma_device_type": 2 00:12:57.735 } 00:12:57.735 ], 00:12:57.735 "driver_specific": { 00:12:57.735 "raid": { 00:12:57.735 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:12:57.735 "strip_size_kb": 0, 00:12:57.735 "state": "online", 00:12:57.735 "raid_level": "raid1", 00:12:57.735 "superblock": true, 00:12:57.735 "num_base_bdevs": 4, 00:12:57.735 "num_base_bdevs_discovered": 4, 00:12:57.735 "num_base_bdevs_operational": 4, 00:12:57.735 "base_bdevs_list": [ 00:12:57.735 { 00:12:57.735 "name": "pt1", 00:12:57.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.735 "is_configured": true, 00:12:57.735 "data_offset": 2048, 00:12:57.735 "data_size": 63488 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "name": "pt2", 00:12:57.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.735 "is_configured": true, 00:12:57.735 "data_offset": 2048, 00:12:57.735 "data_size": 63488 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "name": "pt3", 00:12:57.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.735 "is_configured": true, 00:12:57.735 "data_offset": 2048, 00:12:57.735 "data_size": 63488 00:12:57.735 }, 00:12:57.735 { 00:12:57.735 "name": "pt4", 00:12:57.735 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.735 "is_configured": true, 00:12:57.735 "data_offset": 2048, 00:12:57.735 "data_size": 63488 00:12:57.735 } 00:12:57.735 ] 00:12:57.735 } 00:12:57.735 } 00:12:57.735 }' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.735 pt2 00:12:57.735 pt3 00:12:57.735 pt4' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.735 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.996 19:11:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 [2024-11-27 19:11:07.504497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c434608c-070f-4051-8a2b-b23a0f33d15a 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c434608c-070f-4051-8a2b-b23a0f33d15a ']' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 [2024-11-27 19:11:07.532095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.996 [2024-11-27 19:11:07.532121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.996 [2024-11-27 19:11:07.532214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.996 [2024-11-27 19:11:07.532309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.996 [2024-11-27 19:11:07.532325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.996 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.257 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.257 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:58.257 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:58.257 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.257 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.258 19:11:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.258 [2024-11-27 19:11:07.695843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:58.258 [2024-11-27 19:11:07.698107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:58.258 [2024-11-27 19:11:07.698224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:58.258 [2024-11-27 19:11:07.698280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:58.258 [2024-11-27 19:11:07.698363] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:58.258 [2024-11-27 19:11:07.698457] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:58.258 [2024-11-27 19:11:07.698521] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:58.258 [2024-11-27 19:11:07.698565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:58.258 [2024-11-27 19:11:07.698581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.258 [2024-11-27 19:11:07.698592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:58.258 request: 00:12:58.258 { 00:12:58.258 "name": "raid_bdev1", 00:12:58.258 "raid_level": "raid1", 00:12:58.258 "base_bdevs": [ 00:12:58.258 "malloc1", 00:12:58.258 "malloc2", 00:12:58.258 "malloc3", 00:12:58.258 "malloc4" 00:12:58.258 ], 00:12:58.258 "superblock": false, 00:12:58.258 "method": "bdev_raid_create", 00:12:58.258 "req_id": 1 00:12:58.258 } 00:12:58.258 Got JSON-RPC error response 00:12:58.258 response: 00:12:58.258 { 00:12:58.258 "code": -17, 00:12:58.258 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:58.258 } 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.258 
19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.258 [2024-11-27 19:11:07.759705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.258 [2024-11-27 19:11:07.759763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.258 [2024-11-27 19:11:07.759780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:58.258 [2024-11-27 19:11:07.759793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.258 [2024-11-27 19:11:07.762475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.258 [2024-11-27 19:11:07.762517] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.258 [2024-11-27 19:11:07.762606] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:58.258 [2024-11-27 19:11:07.762672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:58.258 pt1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.258 19:11:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.258 "name": "raid_bdev1", 00:12:58.258 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:12:58.258 "strip_size_kb": 0, 00:12:58.258 "state": "configuring", 00:12:58.258 "raid_level": "raid1", 00:12:58.258 "superblock": true, 00:12:58.258 "num_base_bdevs": 4, 00:12:58.258 "num_base_bdevs_discovered": 1, 00:12:58.258 "num_base_bdevs_operational": 4, 00:12:58.258 "base_bdevs_list": [ 00:12:58.258 { 00:12:58.258 "name": "pt1", 00:12:58.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.258 "is_configured": true, 00:12:58.258 "data_offset": 2048, 00:12:58.258 "data_size": 63488 00:12:58.258 }, 00:12:58.258 { 00:12:58.258 "name": null, 00:12:58.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.258 "is_configured": false, 00:12:58.258 "data_offset": 2048, 00:12:58.258 "data_size": 63488 00:12:58.258 }, 00:12:58.258 { 00:12:58.258 "name": null, 00:12:58.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.258 
"is_configured": false, 00:12:58.258 "data_offset": 2048, 00:12:58.258 "data_size": 63488 00:12:58.258 }, 00:12:58.258 { 00:12:58.258 "name": null, 00:12:58.258 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.258 "is_configured": false, 00:12:58.258 "data_offset": 2048, 00:12:58.258 "data_size": 63488 00:12:58.258 } 00:12:58.258 ] 00:12:58.258 }' 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.258 19:11:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.830 [2024-11-27 19:11:08.203008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.830 [2024-11-27 19:11:08.203164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.830 [2024-11-27 19:11:08.203208] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:58.830 [2024-11-27 19:11:08.203242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.830 [2024-11-27 19:11:08.203834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.830 [2024-11-27 19:11:08.203903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.830 [2024-11-27 19:11:08.204045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.830 [2024-11-27 19:11:08.204107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:58.830 pt2 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.830 [2024-11-27 19:11:08.214944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.830 "name": "raid_bdev1", 00:12:58.830 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:12:58.830 "strip_size_kb": 0, 00:12:58.830 "state": "configuring", 00:12:58.830 "raid_level": "raid1", 00:12:58.830 "superblock": true, 00:12:58.830 "num_base_bdevs": 4, 00:12:58.830 "num_base_bdevs_discovered": 1, 00:12:58.830 "num_base_bdevs_operational": 4, 00:12:58.830 "base_bdevs_list": [ 00:12:58.830 { 00:12:58.830 "name": "pt1", 00:12:58.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.830 "is_configured": true, 00:12:58.830 "data_offset": 2048, 00:12:58.830 "data_size": 63488 00:12:58.830 }, 00:12:58.830 { 00:12:58.830 "name": null, 00:12:58.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.830 "is_configured": false, 00:12:58.830 "data_offset": 0, 00:12:58.830 "data_size": 63488 00:12:58.830 }, 00:12:58.830 { 00:12:58.830 "name": null, 00:12:58.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.830 "is_configured": false, 00:12:58.830 "data_offset": 2048, 00:12:58.830 "data_size": 63488 00:12:58.830 }, 00:12:58.830 { 00:12:58.830 "name": null, 00:12:58.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.830 "is_configured": false, 00:12:58.830 "data_offset": 2048, 00:12:58.830 "data_size": 63488 00:12:58.830 } 00:12:58.830 ] 00:12:58.830 }' 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.830 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.097 [2024-11-27 19:11:08.710127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.097 [2024-11-27 19:11:08.710222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.097 [2024-11-27 19:11:08.710246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:59.097 [2024-11-27 19:11:08.710255] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.097 [2024-11-27 19:11:08.710828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.097 [2024-11-27 19:11:08.710850] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.097 [2024-11-27 19:11:08.710955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:59.097 [2024-11-27 19:11:08.710978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.097 pt2 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.097 19:11:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.097 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.098 [2024-11-27 19:11:08.722037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.098 [2024-11-27 19:11:08.722090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.098 [2024-11-27 19:11:08.722112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:59.098 [2024-11-27 19:11:08.722121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.098 [2024-11-27 19:11:08.722538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.098 [2024-11-27 19:11:08.722554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.098 [2024-11-27 19:11:08.722619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:59.098 [2024-11-27 19:11:08.722637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.368 pt3 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.368 [2024-11-27 19:11:08.733989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:59.368 [2024-11-27 
19:11:08.734034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.368 [2024-11-27 19:11:08.734050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:59.368 [2024-11-27 19:11:08.734058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.368 [2024-11-27 19:11:08.734469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.368 [2024-11-27 19:11:08.734484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:59.368 [2024-11-27 19:11:08.734547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:59.368 [2024-11-27 19:11:08.734572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:59.368 [2024-11-27 19:11:08.734741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:59.368 [2024-11-27 19:11:08.734751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.368 [2024-11-27 19:11:08.735046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:59.368 [2024-11-27 19:11:08.735222] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:59.368 [2024-11-27 19:11:08.735242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:59.368 [2024-11-27 19:11:08.735399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.368 pt4 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.368 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:59.368 "name": "raid_bdev1",
00:12:59.368 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:12:59.368 "strip_size_kb": 0,
00:12:59.368 "state": "online",
00:12:59.368 "raid_level": "raid1",
00:12:59.368 "superblock": true,
00:12:59.368 "num_base_bdevs": 4,
00:12:59.368 "num_base_bdevs_discovered": 4,
00:12:59.368 "num_base_bdevs_operational": 4,
00:12:59.368 "base_bdevs_list": [
00:12:59.368 {
00:12:59.368 "name": "pt1",
00:12:59.368 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:59.368 "is_configured": true,
00:12:59.369 "data_offset": 2048,
00:12:59.369 "data_size": 63488
00:12:59.369 },
00:12:59.369 {
00:12:59.369 "name": "pt2",
00:12:59.369 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:59.369 "is_configured": true,
00:12:59.369 "data_offset": 2048,
00:12:59.369 "data_size": 63488
00:12:59.369 },
00:12:59.369 {
00:12:59.369 "name": "pt3",
00:12:59.369 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:59.369 "is_configured": true,
00:12:59.369 "data_offset": 2048,
00:12:59.369 "data_size": 63488
00:12:59.369 },
00:12:59.369 {
00:12:59.369 "name": "pt4",
00:12:59.369 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:59.369 "is_configured": true,
00:12:59.369 "data_offset": 2048,
00:12:59.369 "data_size": 63488
00:12:59.369 }
00:12:59.369 ]
00:12:59.369 }'
00:12:59.369 19:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:59.369 19:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:59.629 [2024-11-27 19:11:09.193669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.629 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:59.629 "name": "raid_bdev1",
00:12:59.629 "aliases": [
00:12:59.629 "c434608c-070f-4051-8a2b-b23a0f33d15a"
00:12:59.629 ],
00:12:59.629 "product_name": "Raid Volume",
00:12:59.629 "block_size": 512,
00:12:59.629 "num_blocks": 63488,
00:12:59.629 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:12:59.629 "assigned_rate_limits": {
00:12:59.629 "rw_ios_per_sec": 0,
00:12:59.629 "rw_mbytes_per_sec": 0,
00:12:59.629 "r_mbytes_per_sec": 0,
00:12:59.629 "w_mbytes_per_sec": 0
00:12:59.629 },
00:12:59.629 "claimed": false,
00:12:59.629 "zoned": false,
00:12:59.629 "supported_io_types": {
00:12:59.629 "read": true,
00:12:59.629 "write": true,
00:12:59.629 "unmap": false,
00:12:59.629 "flush": false,
00:12:59.629 "reset": true,
00:12:59.629 "nvme_admin": false,
00:12:59.629 "nvme_io": false,
00:12:59.629 "nvme_io_md": false,
00:12:59.629 "write_zeroes": true,
00:12:59.629 "zcopy": false,
00:12:59.629 "get_zone_info": false,
00:12:59.629 "zone_management": false,
00:12:59.629 "zone_append": false,
00:12:59.629 "compare": false,
00:12:59.629 "compare_and_write": false,
00:12:59.629 "abort": false,
00:12:59.629 "seek_hole": false,
00:12:59.629 "seek_data": false,
00:12:59.629 "copy": false,
00:12:59.629 "nvme_iov_md": false
00:12:59.629 },
00:12:59.629 "memory_domains": [
00:12:59.629 {
00:12:59.629 "dma_device_id": "system",
00:12:59.629 "dma_device_type": 1
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:59.629 "dma_device_type": 2
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "system",
00:12:59.629 "dma_device_type": 1
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:59.629 "dma_device_type": 2
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "system",
00:12:59.629 "dma_device_type": 1
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:59.629 "dma_device_type": 2
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "system",
00:12:59.629 "dma_device_type": 1
00:12:59.629 },
00:12:59.629 {
00:12:59.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:59.629 "dma_device_type": 2
00:12:59.629 }
00:12:59.629 ],
00:12:59.629 "driver_specific": {
00:12:59.630 "raid": {
00:12:59.630 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:12:59.630 "strip_size_kb": 0,
00:12:59.630 "state": "online",
00:12:59.630 "raid_level": "raid1",
00:12:59.630 "superblock": true,
00:12:59.630 "num_base_bdevs": 4,
00:12:59.630 "num_base_bdevs_discovered": 4,
00:12:59.630 "num_base_bdevs_operational": 4,
00:12:59.630 "base_bdevs_list": [
00:12:59.630 {
00:12:59.630 "name": "pt1",
00:12:59.630 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:59.630 "is_configured": true,
00:12:59.630 "data_offset": 2048,
00:12:59.630 "data_size": 63488
00:12:59.630 },
00:12:59.630 {
00:12:59.630 "name": "pt2",
00:12:59.630 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:59.630 "is_configured": true,
00:12:59.630 "data_offset": 2048,
00:12:59.630 "data_size": 63488
00:12:59.630 },
00:12:59.630 {
00:12:59.630 "name": "pt3",
00:12:59.630 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:59.630 "is_configured": true,
00:12:59.630 "data_offset": 2048,
00:12:59.630 "data_size": 63488
00:12:59.630 },
00:12:59.630 {
00:12:59.630 "name": "pt4",
00:12:59.630 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:59.630 "is_configured": true,
00:12:59.630 "data_offset": 2048,
00:12:59.630 "data_size": 63488
00:12:59.630 }
00:12:59.630 ]
00:12:59.630 }
00:12:59.630 }
00:12:59.630 }'
00:12:59.630 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:59.889 pt2
00:12:59.889 pt3
00:12:59.889 pt4'
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.889 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:59.890 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.150 [2024-11-27 19:11:09.533071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c434608c-070f-4051-8a2b-b23a0f33d15a '!=' c434608c-070f-4051-8a2b-b23a0f33d15a ']'
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.150 [2024-11-27 19:11:09.568735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:00.150 "name": "raid_bdev1",
00:13:00.150 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:13:00.150 "strip_size_kb": 0,
00:13:00.150 "state": "online",
00:13:00.150 "raid_level": "raid1",
00:13:00.150 "superblock": true,
00:13:00.150 "num_base_bdevs": 4,
00:13:00.150 "num_base_bdevs_discovered": 3,
00:13:00.150 "num_base_bdevs_operational": 3,
00:13:00.150 "base_bdevs_list": [
00:13:00.150 {
00:13:00.150 "name": null,
00:13:00.150 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:00.150 "is_configured": false,
00:13:00.150 "data_offset": 0,
00:13:00.150 "data_size": 63488
00:13:00.150 },
00:13:00.150 {
00:13:00.150 "name": "pt2",
00:13:00.150 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:00.150 "is_configured": true,
00:13:00.150 "data_offset": 2048,
00:13:00.150 "data_size": 63488
00:13:00.150 },
00:13:00.150 {
00:13:00.150 "name": "pt3",
00:13:00.150 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:00.150 "is_configured": true,
00:13:00.150 "data_offset": 2048,
00:13:00.150 "data_size": 63488
00:13:00.150 },
00:13:00.150 {
00:13:00.150 "name": "pt4",
00:13:00.150 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:00.150 "is_configured": true,
00:13:00.150 "data_offset": 2048,
00:13:00.150 "data_size": 63488
00:13:00.150 }
00:13:00.150 ]
00:13:00.150 }'
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:00.150 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.410 [2024-11-27 19:11:09.963995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:00.410 [2024-11-27 19:11:09.964038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:00.410 [2024-11-27 19:11:09.964146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:00.410 [2024-11-27 19:11:09.964236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:00.410 [2024-11-27 19:11:09.964246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:13:00.410 19:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.410 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.670 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.670 [2024-11-27 19:11:10.051808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:00.670 [2024-11-27 19:11:10.051914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:00.670 [2024-11-27 19:11:10.051939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:13:00.670 [2024-11-27 19:11:10.051950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:00.670 [2024-11-27 19:11:10.054573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:00.670 [2024-11-27 19:11:10.054610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:00.670 [2024-11-27 19:11:10.054713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:00.670 [2024-11-27 19:11:10.054770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:00.670 pt2
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:00.671 "name": "raid_bdev1",
00:13:00.671 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:13:00.671 "strip_size_kb": 0,
00:13:00.671 "state": "configuring",
00:13:00.671 "raid_level": "raid1",
00:13:00.671 "superblock": true,
00:13:00.671 "num_base_bdevs": 4,
00:13:00.671 "num_base_bdevs_discovered": 1,
00:13:00.671 "num_base_bdevs_operational": 3,
00:13:00.671 "base_bdevs_list": [
00:13:00.671 {
00:13:00.671 "name": null,
00:13:00.671 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:00.671 "is_configured": false,
00:13:00.671 "data_offset": 2048,
00:13:00.671 "data_size": 63488
00:13:00.671 },
00:13:00.671 {
00:13:00.671 "name": "pt2",
00:13:00.671 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:00.671 "is_configured": true,
00:13:00.671 "data_offset": 2048,
00:13:00.671 "data_size": 63488
00:13:00.671 },
00:13:00.671 {
00:13:00.671 "name": null,
00:13:00.671 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:00.671 "is_configured": false,
00:13:00.671 "data_offset": 2048,
00:13:00.671 "data_size": 63488
00:13:00.671 },
00:13:00.671 {
00:13:00.671 "name": null,
00:13:00.671 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:00.671 "is_configured": false,
00:13:00.671 "data_offset": 2048,
00:13:00.671 "data_size": 63488
00:13:00.671 }
00:13:00.671 ]
00:13:00.671 }'
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:00.671 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.931 [2024-11-27 19:11:10.483110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:00.931 [2024-11-27 19:11:10.483263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:00.931 [2024-11-27 19:11:10.483314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:13:00.931 [2024-11-27 19:11:10.483361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:00.931 [2024-11-27 19:11:10.483956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:00.931 [2024-11-27 19:11:10.484019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:00.931 [2024-11-27 19:11:10.484159] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:00.931 [2024-11-27 19:11:10.484214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:00.931 pt3
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.931 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:00.931 "name": "raid_bdev1",
00:13:00.931 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:13:00.931 "strip_size_kb": 0,
00:13:00.931 "state": "configuring",
00:13:00.931 "raid_level": "raid1",
00:13:00.931 "superblock": true,
00:13:00.931 "num_base_bdevs": 4,
00:13:00.931 "num_base_bdevs_discovered": 2,
00:13:00.931 "num_base_bdevs_operational": 3,
00:13:00.931 "base_bdevs_list": [
00:13:00.931 {
00:13:00.931 "name": null,
00:13:00.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:00.931 "is_configured": false,
00:13:00.931 "data_offset": 2048,
00:13:00.931 "data_size": 63488
00:13:00.931 },
00:13:00.931 {
00:13:00.931 "name": "pt2",
00:13:00.931 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:00.931 "is_configured": true,
00:13:00.931 "data_offset": 2048,
00:13:00.931 "data_size": 63488
00:13:00.931 },
00:13:00.931 {
00:13:00.931 "name": "pt3",
00:13:00.931 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:00.932 "is_configured": true,
00:13:00.932 "data_offset": 2048,
00:13:00.932 "data_size": 63488
00:13:00.932 },
00:13:00.932 {
00:13:00.932 "name": null,
00:13:00.932 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:00.932 "is_configured": false,
00:13:00.932 "data_offset": 2048,
00:13:00.932 "data_size": 63488
00:13:00.932 }
00:13:00.932 ]
00:13:00.932 }'
00:13:00.932 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:00.932 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.502 [2024-11-27 19:11:10.926365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:13:01.502 [2024-11-27 19:11:10.926550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:01.502 [2024-11-27 19:11:10.926590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:13:01.502 [2024-11-27 19:11:10.926602] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:01.502 [2024-11-27 19:11:10.927188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:01.502 [2024-11-27 19:11:10.927208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:13:01.502 [2024-11-27 19:11:10.927337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:13:01.502 [2024-11-27 19:11:10.927368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:13:01.502 [2024-11-27 19:11:10.927532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:13:01.502 [2024-11-27 19:11:10.927541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:01.502 [2024-11-27 19:11:10.927852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:01.502 [2024-11-27 19:11:10.928045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:13:01.502 [2024-11-27 19:11:10.928059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:13:01.502 [2024-11-27 19:11:10.928207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:01.502 pt4
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.502 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:01.502 "name": "raid_bdev1",
00:13:01.502 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a",
00:13:01.502 "strip_size_kb": 0,
00:13:01.502 "state": "online",
00:13:01.502 "raid_level": "raid1",
00:13:01.502 "superblock": true,
00:13:01.502 "num_base_bdevs": 4,
00:13:01.502 "num_base_bdevs_discovered": 3,
00:13:01.502 "num_base_bdevs_operational": 3,
00:13:01.502 "base_bdevs_list": [
00:13:01.502 {
00:13:01.502 "name": null,
00:13:01.502 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:01.502 "is_configured": false,
00:13:01.502 "data_offset": 2048,
00:13:01.502 "data_size": 63488
00:13:01.502 },
00:13:01.502 {
00:13:01.502 "name": "pt2",
00:13:01.502 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:01.502 "is_configured": true,
00:13:01.502 "data_offset": 2048,
00:13:01.502 "data_size": 63488
00:13:01.502 },
00:13:01.502 {
00:13:01.502 "name": "pt3",
00:13:01.502 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:01.503 "is_configured": true,
00:13:01.503 "data_offset": 2048,
00:13:01.503 "data_size": 63488
00:13:01.503 },
00:13:01.503 {
00:13:01.503 "name": "pt4",
00:13:01.503 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:01.503 "is_configured": true,
00:13:01.503 "data_offset": 2048,
00:13:01.503 "data_size": 63488
00:13:01.503 }
00:13:01.503 ]
00:13:01.503 }'
00:13:01.503 19:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:01.503 19:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.763 [2024-11-27 19:11:11.369586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:01.763 [2024-11-27 19:11:11.369716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:01.763 [2024-11-27 19:11:11.369845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:01.763 [2024-11-27 19:11:11.369947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:01.763 [2024-11-27 19:11:11.370001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:13:01.763 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:02.024 [2024-11-27 19:11:11.445417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:02.024 [2024-11-27 19:11:11.445520] vbdev_passthru.c: 636:vbdev_passthru_register:
*NOTICE*: base bdev opened 00:13:02.024 [2024-11-27 19:11:11.445538] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:02.024 [2024-11-27 19:11:11.445553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.024 [2024-11-27 19:11:11.448136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.024 [2024-11-27 19:11:11.448179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:02.024 [2024-11-27 19:11:11.448269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:02.024 [2024-11-27 19:11:11.448318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:02.024 [2024-11-27 19:11:11.448464] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:02.024 [2024-11-27 19:11:11.448478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.024 [2024-11-27 19:11:11.448494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:02.024 [2024-11-27 19:11:11.448578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:02.024 [2024-11-27 19:11:11.448681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:02.024 pt1 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.024 "name": "raid_bdev1", 00:13:02.024 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:13:02.024 "strip_size_kb": 0, 00:13:02.024 "state": "configuring", 00:13:02.024 "raid_level": "raid1", 00:13:02.024 "superblock": true, 00:13:02.024 "num_base_bdevs": 4, 00:13:02.024 "num_base_bdevs_discovered": 2, 00:13:02.024 "num_base_bdevs_operational": 3, 00:13:02.024 "base_bdevs_list": [ 00:13:02.024 { 00:13:02.024 "name": null, 00:13:02.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.024 "is_configured": false, 00:13:02.024 
"data_offset": 2048, 00:13:02.024 "data_size": 63488 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "name": "pt2", 00:13:02.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.024 "is_configured": true, 00:13:02.024 "data_offset": 2048, 00:13:02.024 "data_size": 63488 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "name": "pt3", 00:13:02.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.024 "is_configured": true, 00:13:02.024 "data_offset": 2048, 00:13:02.024 "data_size": 63488 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "name": null, 00:13:02.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:02.024 "is_configured": false, 00:13:02.024 "data_offset": 2048, 00:13:02.024 "data_size": 63488 00:13:02.024 } 00:13:02.024 ] 00:13:02.024 }' 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.024 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:02.286 [2024-11-27 19:11:11.892810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:02.286 [2024-11-27 19:11:11.892956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.286 [2024-11-27 19:11:11.893000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:02.286 [2024-11-27 19:11:11.893029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.286 [2024-11-27 19:11:11.893572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.286 [2024-11-27 19:11:11.893632] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:02.286 [2024-11-27 19:11:11.893775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:02.286 [2024-11-27 19:11:11.893831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:02.286 [2024-11-27 19:11:11.894010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:02.286 [2024-11-27 19:11:11.894049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:02.286 [2024-11-27 19:11:11.894344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:02.286 [2024-11-27 19:11:11.894537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:02.286 [2024-11-27 19:11:11.894580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:02.286 [2024-11-27 19:11:11.894779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.286 pt4 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.286 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.547 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.547 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.547 "name": "raid_bdev1", 00:13:02.547 "uuid": "c434608c-070f-4051-8a2b-b23a0f33d15a", 00:13:02.547 "strip_size_kb": 0, 00:13:02.547 "state": "online", 00:13:02.547 "raid_level": "raid1", 00:13:02.547 "superblock": true, 00:13:02.547 "num_base_bdevs": 4, 00:13:02.547 "num_base_bdevs_discovered": 3, 00:13:02.547 "num_base_bdevs_operational": 3, 00:13:02.547 
"base_bdevs_list": [ 00:13:02.547 { 00:13:02.547 "name": null, 00:13:02.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.547 "is_configured": false, 00:13:02.547 "data_offset": 2048, 00:13:02.547 "data_size": 63488 00:13:02.547 }, 00:13:02.547 { 00:13:02.547 "name": "pt2", 00:13:02.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.547 "is_configured": true, 00:13:02.547 "data_offset": 2048, 00:13:02.547 "data_size": 63488 00:13:02.547 }, 00:13:02.547 { 00:13:02.547 "name": "pt3", 00:13:02.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.547 "is_configured": true, 00:13:02.547 "data_offset": 2048, 00:13:02.547 "data_size": 63488 00:13:02.547 }, 00:13:02.547 { 00:13:02.547 "name": "pt4", 00:13:02.547 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:02.547 "is_configured": true, 00:13:02.547 "data_offset": 2048, 00:13:02.547 "data_size": 63488 00:13:02.547 } 00:13:02.547 ] 00:13:02.547 }' 00:13:02.547 19:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.547 19:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:02.807 [2024-11-27 19:11:12.408207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.807 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c434608c-070f-4051-8a2b-b23a0f33d15a '!=' c434608c-070f-4051-8a2b-b23a0f33d15a ']' 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74632 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74632 ']' 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74632 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74632 00:13:03.066 killing process with pid 74632 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74632' 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74632 00:13:03.066 [2024-11-27 19:11:12.491171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.066 [2024-11-27 19:11:12.491281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:13:03.066 [2024-11-27 19:11:12.491394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.066 19:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74632 00:13:03.066 [2024-11-27 19:11:12.491408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:03.326 [2024-11-27 19:11:12.926932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.707 19:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:04.707 00:13:04.707 real 0m8.670s 00:13:04.707 user 0m13.360s 00:13:04.707 sys 0m1.736s 00:13:04.707 19:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.707 ************************************ 00:13:04.707 END TEST raid_superblock_test 00:13:04.707 ************************************ 00:13:04.707 19:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.707 19:11:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:04.707 19:11:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:04.707 19:11:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.707 19:11:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.707 ************************************ 00:13:04.707 START TEST raid_read_error_test 00:13:04.707 ************************************ 00:13:04.707 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:04.707 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:04.707 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- 
# local error_io_type=read 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ySp9lo6KIt 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75119 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75119 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75119 ']' 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.708 19:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.708 [2024-11-27 19:11:14.338203] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:04.708 [2024-11-27 19:11:14.338780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75119 ] 00:13:04.967 [2024-11-27 19:11:14.514049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.227 [2024-11-27 19:11:14.655250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.487 [2024-11-27 19:11:14.892146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.487 [2024-11-27 19:11:14.892299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.747 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 BaseBdev1_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 true 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 [2024-11-27 19:11:15.224209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:05.748 [2024-11-27 19:11:15.224268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.748 [2024-11-27 19:11:15.224288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:05.748 [2024-11-27 19:11:15.224300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.748 [2024-11-27 19:11:15.226720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.748 [2024-11-27 19:11:15.226755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:05.748 BaseBdev1 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 BaseBdev2_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 true 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 [2024-11-27 19:11:15.299031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.748 [2024-11-27 19:11:15.299104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.748 [2024-11-27 19:11:15.299126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:05.748 [2024-11-27 19:11:15.299138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.748 [2024-11-27 19:11:15.301766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.748 [2024-11-27 19:11:15.301805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.748 BaseBdev2 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 BaseBdev3_malloc 00:13:05.748 19:11:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.748 true 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.748 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 [2024-11-27 19:11:15.387231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:06.009 [2024-11-27 19:11:15.387340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.009 [2024-11-27 19:11:15.387378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:06.009 [2024-11-27 19:11:15.387390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.009 [2024-11-27 19:11:15.389831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.009 [2024-11-27 19:11:15.389866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:06.009 BaseBdev3 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 BaseBdev4_malloc 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 true 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 [2024-11-27 19:11:15.460164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:06.009 [2024-11-27 19:11:15.460263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.009 [2024-11-27 19:11:15.460284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:06.009 [2024-11-27 19:11:15.460296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.009 [2024-11-27 19:11:15.462644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.009 [2024-11-27 19:11:15.462684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:06.009 BaseBdev4 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 [2024-11-27 19:11:15.472202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.009 [2024-11-27 19:11:15.474283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.009 [2024-11-27 19:11:15.474358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.009 [2024-11-27 19:11:15.474420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:06.009 [2024-11-27 19:11:15.474665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:06.009 [2024-11-27 19:11:15.474678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.009 [2024-11-27 19:11:15.474957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:06.009 [2024-11-27 19:11:15.475153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:06.009 [2024-11-27 19:11:15.475168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:06.009 [2024-11-27 19:11:15.475320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:06.009 19:11:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.009 "name": "raid_bdev1", 00:13:06.009 "uuid": "cf8911bc-0fa3-45d7-bec9-46f6b8e8fc0c", 00:13:06.009 "strip_size_kb": 0, 00:13:06.009 "state": "online", 00:13:06.009 "raid_level": "raid1", 00:13:06.009 "superblock": true, 00:13:06.009 "num_base_bdevs": 4, 00:13:06.009 "num_base_bdevs_discovered": 4, 00:13:06.009 "num_base_bdevs_operational": 4, 00:13:06.009 "base_bdevs_list": [ 00:13:06.009 { 
00:13:06.009 "name": "BaseBdev1", 00:13:06.009 "uuid": "144c0219-f149-57f6-ac01-8c25557c6aec", 00:13:06.009 "is_configured": true, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 }, 00:13:06.009 { 00:13:06.009 "name": "BaseBdev2", 00:13:06.009 "uuid": "d821bc07-8202-530d-a5a8-f9d2818043b8", 00:13:06.009 "is_configured": true, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 }, 00:13:06.009 { 00:13:06.009 "name": "BaseBdev3", 00:13:06.009 "uuid": "23cc902b-1fb7-5a61-be45-9d9b55c3d793", 00:13:06.009 "is_configured": true, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 }, 00:13:06.009 { 00:13:06.009 "name": "BaseBdev4", 00:13:06.009 "uuid": "e9be34ef-69c0-5070-9cae-7b9d8d47d967", 00:13:06.009 "is_configured": true, 00:13:06.009 "data_offset": 2048, 00:13:06.009 "data_size": 63488 00:13:06.009 } 00:13:06.009 ] 00:13:06.009 }' 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.009 19:11:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.579 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:06.579 19:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:06.579 [2024-11-27 19:11:16.020828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:07.519 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.520 19:11:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.520 19:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.520 19:11:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.520 19:11:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.520 "name": "raid_bdev1", 00:13:07.520 "uuid": "cf8911bc-0fa3-45d7-bec9-46f6b8e8fc0c", 00:13:07.520 "strip_size_kb": 0, 00:13:07.520 "state": "online", 00:13:07.520 "raid_level": "raid1", 00:13:07.520 "superblock": true, 00:13:07.520 "num_base_bdevs": 4, 00:13:07.520 "num_base_bdevs_discovered": 4, 00:13:07.520 "num_base_bdevs_operational": 4, 00:13:07.520 "base_bdevs_list": [ 00:13:07.520 { 00:13:07.520 "name": "BaseBdev1", 00:13:07.520 "uuid": "144c0219-f149-57f6-ac01-8c25557c6aec", 00:13:07.520 "is_configured": true, 00:13:07.520 "data_offset": 2048, 00:13:07.520 "data_size": 63488 00:13:07.520 }, 00:13:07.520 { 00:13:07.520 "name": "BaseBdev2", 00:13:07.520 "uuid": "d821bc07-8202-530d-a5a8-f9d2818043b8", 00:13:07.520 "is_configured": true, 00:13:07.520 "data_offset": 2048, 00:13:07.520 "data_size": 63488 00:13:07.520 }, 00:13:07.520 { 00:13:07.520 "name": "BaseBdev3", 00:13:07.520 "uuid": "23cc902b-1fb7-5a61-be45-9d9b55c3d793", 00:13:07.520 "is_configured": true, 00:13:07.520 "data_offset": 2048, 00:13:07.520 "data_size": 63488 00:13:07.520 }, 00:13:07.520 { 00:13:07.520 "name": "BaseBdev4", 00:13:07.520 "uuid": "e9be34ef-69c0-5070-9cae-7b9d8d47d967", 00:13:07.520 "is_configured": true, 00:13:07.520 "data_offset": 2048, 00:13:07.520 "data_size": 63488 00:13:07.520 } 00:13:07.520 ] 00:13:07.520 }' 00:13:07.520 19:11:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.520 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.782 [2024-11-27 19:11:17.399029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.782 [2024-11-27 19:11:17.399129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.782 [2024-11-27 19:11:17.402080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.782 [2024-11-27 19:11:17.402206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.782 [2024-11-27 19:11:17.402359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.782 [2024-11-27 19:11:17.402408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:07.782 { 00:13:07.782 "results": [ 00:13:07.782 { 00:13:07.782 "job": "raid_bdev1", 00:13:07.782 "core_mask": "0x1", 00:13:07.782 "workload": "randrw", 00:13:07.782 "percentage": 50, 00:13:07.782 "status": "finished", 00:13:07.782 "queue_depth": 1, 00:13:07.782 "io_size": 131072, 00:13:07.782 "runtime": 1.378887, 00:13:07.782 "iops": 7670.679323251289, 00:13:07.782 "mibps": 958.8349154064111, 00:13:07.782 "io_failed": 0, 00:13:07.782 "io_timeout": 0, 00:13:07.782 "avg_latency_us": 127.73118833689149, 00:13:07.782 "min_latency_us": 23.811353711790392, 00:13:07.782 "max_latency_us": 1516.7720524017468 00:13:07.782 } 00:13:07.782 ], 00:13:07.782 "core_count": 1 00:13:07.782 } 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75119 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75119 ']' 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75119 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:07.782 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.062 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75119 00:13:08.062 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.062 killing process with pid 75119 00:13:08.062 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.062 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75119' 00:13:08.062 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75119 00:13:08.062 [2024-11-27 19:11:17.450195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.062 19:11:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75119 00:13:08.322 [2024-11-27 19:11:17.809451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ySp9lo6KIt 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:09.701 00:13:09.701 real 0m4.871s 00:13:09.701 user 0m5.610s 00:13:09.701 sys 0m0.675s 
00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.701 ************************************ 00:13:09.701 END TEST raid_read_error_test 00:13:09.701 ************************************ 00:13:09.701 19:11:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.701 19:11:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:09.701 19:11:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:09.701 19:11:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.701 19:11:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.701 ************************************ 00:13:09.701 START TEST raid_write_error_test 00:13:09.701 ************************************ 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.S9JVHBf2BG 00:13:09.701 19:11:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75270 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75270 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75270 ']' 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.701 19:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.701 [2024-11-27 19:11:19.278656] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:09.701 [2024-11-27 19:11:19.278791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75270 ] 00:13:09.960 [2024-11-27 19:11:19.452016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.960 [2024-11-27 19:11:19.590942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.220 [2024-11-27 19:11:19.823144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.220 [2024-11-27 19:11:19.823228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.480 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.480 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:10.480 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.480 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:10.480 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.480 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 BaseBdev1_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 true 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 [2024-11-27 19:11:20.173412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:10.741 [2024-11-27 19:11:20.173529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.741 [2024-11-27 19:11:20.173555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:10.741 [2024-11-27 19:11:20.173567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.741 [2024-11-27 19:11:20.176105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.741 [2024-11-27 19:11:20.176146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.741 BaseBdev1 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 BaseBdev2_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:10.741 19:11:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 true 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 [2024-11-27 19:11:20.245563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:10.741 [2024-11-27 19:11:20.245628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.741 [2024-11-27 19:11:20.245645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:10.741 [2024-11-27 19:11:20.245657] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.741 [2024-11-27 19:11:20.248101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.741 [2024-11-27 19:11:20.248142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:10.741 BaseBdev2 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:10.741 BaseBdev3_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 true 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.741 [2024-11-27 19:11:20.331361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:10.741 [2024-11-27 19:11:20.331482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.741 [2024-11-27 19:11:20.331507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:10.741 [2024-11-27 19:11:20.331537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.741 [2024-11-27 19:11:20.334063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.741 [2024-11-27 19:11:20.334103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:10.741 BaseBdev3 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.741 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.002 BaseBdev4_malloc 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.002 true 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.002 [2024-11-27 19:11:20.406523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:11.002 [2024-11-27 19:11:20.406587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.002 [2024-11-27 19:11:20.406608] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:11.002 [2024-11-27 19:11:20.406619] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.002 [2024-11-27 19:11:20.409308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.002 [2024-11-27 19:11:20.409353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:11.002 BaseBdev4 
00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.002 [2024-11-27 19:11:20.418564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.002 [2024-11-27 19:11:20.420818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.002 [2024-11-27 19:11:20.420890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.002 [2024-11-27 19:11:20.420948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:11.002 [2024-11-27 19:11:20.421175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:11.002 [2024-11-27 19:11:20.421190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.002 [2024-11-27 19:11:20.421438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:11.002 [2024-11-27 19:11:20.421612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:11.002 [2024-11-27 19:11:20.421622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:11.002 [2024-11-27 19:11:20.421872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.002 "name": "raid_bdev1", 00:13:11.002 "uuid": "046ee155-18af-4dc7-8155-e3c1307e2ca4", 00:13:11.002 "strip_size_kb": 0, 00:13:11.002 "state": "online", 00:13:11.002 "raid_level": "raid1", 00:13:11.002 "superblock": true, 00:13:11.002 "num_base_bdevs": 4, 00:13:11.002 "num_base_bdevs_discovered": 4, 00:13:11.002 
"num_base_bdevs_operational": 4, 00:13:11.002 "base_bdevs_list": [ 00:13:11.002 { 00:13:11.002 "name": "BaseBdev1", 00:13:11.002 "uuid": "2093eb5f-2973-5b74-95dd-fac95f2dd1c6", 00:13:11.002 "is_configured": true, 00:13:11.002 "data_offset": 2048, 00:13:11.002 "data_size": 63488 00:13:11.002 }, 00:13:11.002 { 00:13:11.002 "name": "BaseBdev2", 00:13:11.002 "uuid": "70b0a568-2bd5-523e-8c47-538115baf1f9", 00:13:11.002 "is_configured": true, 00:13:11.002 "data_offset": 2048, 00:13:11.002 "data_size": 63488 00:13:11.002 }, 00:13:11.002 { 00:13:11.002 "name": "BaseBdev3", 00:13:11.002 "uuid": "6403746b-4c3b-5900-ac93-59ed62a4bd94", 00:13:11.002 "is_configured": true, 00:13:11.002 "data_offset": 2048, 00:13:11.002 "data_size": 63488 00:13:11.002 }, 00:13:11.002 { 00:13:11.002 "name": "BaseBdev4", 00:13:11.002 "uuid": "ba9dc762-6fa5-53a9-bd31-abd64f948eb0", 00:13:11.002 "is_configured": true, 00:13:11.002 "data_offset": 2048, 00:13:11.002 "data_size": 63488 00:13:11.002 } 00:13:11.002 ] 00:13:11.002 }' 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.002 19:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.572 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:11.572 19:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:11.572 [2024-11-27 19:11:21.019084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:12.512 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:12.512 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.512 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.512 [2024-11-27 19:11:21.911852] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:12.512 [2024-11-27 19:11:21.911925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.513 [2024-11-27 19:11:21.912196] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.513 "name": "raid_bdev1", 00:13:12.513 "uuid": "046ee155-18af-4dc7-8155-e3c1307e2ca4", 00:13:12.513 "strip_size_kb": 0, 00:13:12.513 "state": "online", 00:13:12.513 "raid_level": "raid1", 00:13:12.513 "superblock": true, 00:13:12.513 "num_base_bdevs": 4, 00:13:12.513 "num_base_bdevs_discovered": 3, 00:13:12.513 "num_base_bdevs_operational": 3, 00:13:12.513 "base_bdevs_list": [ 00:13:12.513 { 00:13:12.513 "name": null, 00:13:12.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.513 "is_configured": false, 00:13:12.513 "data_offset": 0, 00:13:12.513 "data_size": 63488 00:13:12.513 }, 00:13:12.513 { 00:13:12.513 "name": "BaseBdev2", 00:13:12.513 "uuid": "70b0a568-2bd5-523e-8c47-538115baf1f9", 00:13:12.513 "is_configured": true, 00:13:12.513 "data_offset": 2048, 00:13:12.513 "data_size": 63488 00:13:12.513 }, 00:13:12.513 { 00:13:12.513 "name": "BaseBdev3", 00:13:12.513 "uuid": "6403746b-4c3b-5900-ac93-59ed62a4bd94", 00:13:12.513 "is_configured": true, 00:13:12.513 "data_offset": 2048, 00:13:12.513 "data_size": 63488 00:13:12.513 }, 00:13:12.513 { 00:13:12.513 "name": "BaseBdev4", 00:13:12.513 "uuid": "ba9dc762-6fa5-53a9-bd31-abd64f948eb0", 00:13:12.513 "is_configured": true, 00:13:12.513 "data_offset": 2048, 00:13:12.513 "data_size": 63488 00:13:12.513 } 00:13:12.513 ] 
00:13:12.513 }' 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.513 19:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.773 19:11:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.774 [2024-11-27 19:11:22.325686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.774 [2024-11-27 19:11:22.325739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.774 [2024-11-27 19:11:22.328874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.774 [2024-11-27 19:11:22.328956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.774 [2024-11-27 19:11:22.329110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.774 [2024-11-27 19:11:22.329163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:12.774 { 00:13:12.774 "results": [ 00:13:12.774 { 00:13:12.774 "job": "raid_bdev1", 00:13:12.774 "core_mask": "0x1", 00:13:12.774 "workload": "randrw", 00:13:12.774 "percentage": 50, 00:13:12.774 "status": "finished", 00:13:12.774 "queue_depth": 1, 00:13:12.774 "io_size": 131072, 00:13:12.774 "runtime": 1.307098, 00:13:12.774 "iops": 8495.919969275448, 00:13:12.774 "mibps": 1061.989996159431, 00:13:12.774 "io_failed": 0, 00:13:12.774 "io_timeout": 0, 00:13:12.774 "avg_latency_us": 115.04182238222289, 00:13:12.774 "min_latency_us": 23.252401746724892, 00:13:12.774 "max_latency_us": 1516.7720524017468 00:13:12.774 } 00:13:12.774 ], 00:13:12.774 "core_count": 1 
00:13:12.774 } 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75270 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75270 ']' 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75270 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75270 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75270' 00:13:12.774 killing process with pid 75270 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75270 00:13:12.774 [2024-11-27 19:11:22.373183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.774 19:11:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75270 00:13:13.343 [2024-11-27 19:11:22.728612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.S9JVHBf2BG 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:14.725 ************************************ 00:13:14.725 END TEST raid_write_error_test 00:13:14.725 ************************************ 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:14.725 00:13:14.725 real 0m4.852s 00:13:14.725 user 0m5.612s 00:13:14.725 sys 0m0.693s 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.725 19:11:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.725 19:11:24 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:14.725 19:11:24 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:14.725 19:11:24 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:14.725 19:11:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.725 19:11:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.725 19:11:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.725 ************************************ 00:13:14.725 START TEST raid_rebuild_test 00:13:14.725 ************************************ 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:14.725 
19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75414 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75414 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75414 ']' 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.725 19:11:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.725 [2024-11-27 19:11:24.204231] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:14.725 [2024-11-27 19:11:24.204454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75414 ] 00:13:14.725 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.725 Zero copy mechanism will not be used. 
00:13:14.985 [2024-11-27 19:11:24.383906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.985 [2024-11-27 19:11:24.528269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.245 [2024-11-27 19:11:24.768470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.245 [2024-11-27 19:11:24.768650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.504 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.504 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:15.504 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.505 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.505 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.505 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 BaseBdev1_malloc 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 [2024-11-27 19:11:25.155608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.765 [2024-11-27 19:11:25.155682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.765 [2024-11-27 19:11:25.155727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.765 [2024-11-27 19:11:25.155742] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.765 [2024-11-27 19:11:25.158159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.765 [2024-11-27 19:11:25.158202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.765 BaseBdev1 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 BaseBdev2_malloc 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 [2024-11-27 19:11:25.213559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.765 [2024-11-27 19:11:25.213628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.765 [2024-11-27 19:11:25.213653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.765 [2024-11-27 19:11:25.213667] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.765 [2024-11-27 19:11:25.216222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.765 [2024-11-27 19:11:25.216261] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.765 BaseBdev2 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 spare_malloc 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 spare_delay 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 [2024-11-27 19:11:25.290478] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.765 [2024-11-27 19:11:25.290553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.765 [2024-11-27 19:11:25.290573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:15.765 [2024-11-27 19:11:25.290585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.765 [2024-11-27 
19:11:25.293095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.765 [2024-11-27 19:11:25.293192] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.765 spare 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.765 [2024-11-27 19:11:25.298535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.765 [2024-11-27 19:11:25.300599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.765 [2024-11-27 19:11:25.300684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:15.765 [2024-11-27 19:11:25.300781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:15.765 [2024-11-27 19:11:25.301059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:15.765 [2024-11-27 19:11:25.301288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:15.765 [2024-11-27 19:11:25.301336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:15.765 [2024-11-27 19:11:25.301548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.765 19:11:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.765 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.766 "name": "raid_bdev1", 00:13:15.766 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:15.766 "strip_size_kb": 0, 00:13:15.766 "state": "online", 00:13:15.766 "raid_level": "raid1", 00:13:15.766 "superblock": false, 00:13:15.766 "num_base_bdevs": 2, 00:13:15.766 "num_base_bdevs_discovered": 2, 00:13:15.766 "num_base_bdevs_operational": 2, 00:13:15.766 "base_bdevs_list": [ 00:13:15.766 { 00:13:15.766 "name": "BaseBdev1", 
00:13:15.766 "uuid": "2995096b-8a78-5108-8e18-503d3a82f97a", 00:13:15.766 "is_configured": true, 00:13:15.766 "data_offset": 0, 00:13:15.766 "data_size": 65536 00:13:15.766 }, 00:13:15.766 { 00:13:15.766 "name": "BaseBdev2", 00:13:15.766 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:15.766 "is_configured": true, 00:13:15.766 "data_offset": 0, 00:13:15.766 "data_size": 65536 00:13:15.766 } 00:13:15.766 ] 00:13:15.766 }' 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.766 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:16.336 [2024-11-27 19:11:25.806008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:16.336 
19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:16.336 19:11:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:16.597 [2024-11-27 19:11:26.097290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:16.597 /dev/nbd0 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.597 1+0 records in 00:13:16.597 1+0 records out 00:13:16.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429313 s, 9.5 MB/s 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:16.597 19:11:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:13:21.872 65536+0 records in 00:13:21.872 65536+0 records out 00:13:21.872 33554432 bytes (34 MB, 32 MiB) copied, 4.95613 s, 6.8 MB/s 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:21.872 [2024-11-27 19:11:31.380084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.872 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.873 [2024-11-27 19:11:31.412141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.873 19:11:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.873 "name": "raid_bdev1", 00:13:21.873 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:21.873 "strip_size_kb": 0, 00:13:21.873 "state": "online", 00:13:21.873 "raid_level": "raid1", 00:13:21.873 "superblock": false, 00:13:21.873 "num_base_bdevs": 2, 00:13:21.873 "num_base_bdevs_discovered": 1, 00:13:21.873 "num_base_bdevs_operational": 1, 00:13:21.873 "base_bdevs_list": [ 00:13:21.873 { 00:13:21.873 "name": null, 00:13:21.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.873 "is_configured": false, 00:13:21.873 "data_offset": 0, 00:13:21.873 "data_size": 65536 00:13:21.873 }, 00:13:21.873 { 00:13:21.873 "name": "BaseBdev2", 00:13:21.873 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:21.873 "is_configured": true, 00:13:21.873 "data_offset": 0, 00:13:21.873 "data_size": 65536 00:13:21.873 } 00:13:21.873 ] 00:13:21.873 }' 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.873 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.442 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.442 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.442 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.442 [2024-11-27 19:11:31.843584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.442 [2024-11-27 19:11:31.862869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:22.442 19:11:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.442 19:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:22.442 [2024-11-27 19:11:31.865124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.381 "name": "raid_bdev1", 00:13:23.381 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:23.381 "strip_size_kb": 0, 00:13:23.381 "state": "online", 00:13:23.381 "raid_level": "raid1", 00:13:23.381 "superblock": false, 00:13:23.381 "num_base_bdevs": 2, 00:13:23.381 "num_base_bdevs_discovered": 2, 00:13:23.381 "num_base_bdevs_operational": 2, 00:13:23.381 "process": { 00:13:23.381 "type": "rebuild", 00:13:23.381 "target": "spare", 00:13:23.381 "progress": { 00:13:23.381 "blocks": 20480, 00:13:23.381 "percent": 31 00:13:23.381 } 00:13:23.381 }, 00:13:23.381 "base_bdevs_list": [ 00:13:23.381 { 00:13:23.381 "name": "spare", 00:13:23.381 "uuid": "ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:23.381 "is_configured": true, 00:13:23.381 "data_offset": 0, 00:13:23.381 
"data_size": 65536 00:13:23.381 }, 00:13:23.381 { 00:13:23.381 "name": "BaseBdev2", 00:13:23.381 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:23.381 "is_configured": true, 00:13:23.381 "data_offset": 0, 00:13:23.381 "data_size": 65536 00:13:23.381 } 00:13:23.381 ] 00:13:23.381 }' 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.381 19:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.642 [2024-11-27 19:11:33.028043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.642 [2024-11-27 19:11:33.070632] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.642 [2024-11-27 19:11:33.070765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.642 [2024-11-27 19:11:33.070782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.642 [2024-11-27 19:11:33.070792] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.642 "name": "raid_bdev1", 00:13:23.642 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:23.642 "strip_size_kb": 0, 00:13:23.642 "state": "online", 00:13:23.642 "raid_level": "raid1", 00:13:23.642 "superblock": false, 00:13:23.642 "num_base_bdevs": 2, 00:13:23.642 "num_base_bdevs_discovered": 1, 00:13:23.642 "num_base_bdevs_operational": 1, 00:13:23.642 "base_bdevs_list": [ 00:13:23.642 { 00:13:23.642 "name": null, 00:13:23.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.642 
"is_configured": false, 00:13:23.642 "data_offset": 0, 00:13:23.642 "data_size": 65536 00:13:23.642 }, 00:13:23.642 { 00:13:23.642 "name": "BaseBdev2", 00:13:23.642 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:23.642 "is_configured": true, 00:13:23.642 "data_offset": 0, 00:13:23.642 "data_size": 65536 00:13:23.642 } 00:13:23.642 ] 00:13:23.642 }' 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.642 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.212 "name": "raid_bdev1", 00:13:24.212 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:24.212 "strip_size_kb": 0, 00:13:24.212 "state": "online", 00:13:24.212 "raid_level": "raid1", 00:13:24.212 "superblock": false, 00:13:24.212 "num_base_bdevs": 2, 00:13:24.212 
"num_base_bdevs_discovered": 1, 00:13:24.212 "num_base_bdevs_operational": 1, 00:13:24.212 "base_bdevs_list": [ 00:13:24.212 { 00:13:24.212 "name": null, 00:13:24.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.212 "is_configured": false, 00:13:24.212 "data_offset": 0, 00:13:24.212 "data_size": 65536 00:13:24.212 }, 00:13:24.212 { 00:13:24.212 "name": "BaseBdev2", 00:13:24.212 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:24.212 "is_configured": true, 00:13:24.212 "data_offset": 0, 00:13:24.212 "data_size": 65536 00:13:24.212 } 00:13:24.212 ] 00:13:24.212 }' 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.212 [2024-11-27 19:11:33.675758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.212 [2024-11-27 19:11:33.691562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.212 19:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.212 [2024-11-27 19:11:33.693305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.151 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.151 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.151 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.151 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.152 "name": "raid_bdev1", 00:13:25.152 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:25.152 "strip_size_kb": 0, 00:13:25.152 "state": "online", 00:13:25.152 "raid_level": "raid1", 00:13:25.152 "superblock": false, 00:13:25.152 "num_base_bdevs": 2, 00:13:25.152 "num_base_bdevs_discovered": 2, 00:13:25.152 "num_base_bdevs_operational": 2, 00:13:25.152 "process": { 00:13:25.152 "type": "rebuild", 00:13:25.152 "target": "spare", 00:13:25.152 "progress": { 00:13:25.152 "blocks": 20480, 00:13:25.152 "percent": 31 00:13:25.152 } 00:13:25.152 }, 00:13:25.152 "base_bdevs_list": [ 00:13:25.152 { 00:13:25.152 "name": "spare", 00:13:25.152 "uuid": "ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:25.152 "is_configured": true, 00:13:25.152 "data_offset": 0, 00:13:25.152 "data_size": 65536 00:13:25.152 }, 00:13:25.152 { 00:13:25.152 "name": "BaseBdev2", 00:13:25.152 "uuid": 
"df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:25.152 "is_configured": true, 00:13:25.152 "data_offset": 0, 00:13:25.152 "data_size": 65536 00:13:25.152 } 00:13:25.152 ] 00:13:25.152 }' 00:13:25.152 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.412 "name": "raid_bdev1", 00:13:25.412 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:25.412 "strip_size_kb": 0, 00:13:25.412 "state": "online", 00:13:25.412 "raid_level": "raid1", 00:13:25.412 "superblock": false, 00:13:25.412 "num_base_bdevs": 2, 00:13:25.412 "num_base_bdevs_discovered": 2, 00:13:25.412 "num_base_bdevs_operational": 2, 00:13:25.412 "process": { 00:13:25.412 "type": "rebuild", 00:13:25.412 "target": "spare", 00:13:25.412 "progress": { 00:13:25.412 "blocks": 22528, 00:13:25.412 "percent": 34 00:13:25.412 } 00:13:25.412 }, 00:13:25.412 "base_bdevs_list": [ 00:13:25.412 { 00:13:25.412 "name": "spare", 00:13:25.412 "uuid": "ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:25.412 "is_configured": true, 00:13:25.412 "data_offset": 0, 00:13:25.412 "data_size": 65536 00:13:25.412 }, 00:13:25.412 { 00:13:25.412 "name": "BaseBdev2", 00:13:25.412 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:25.412 "is_configured": true, 00:13:25.412 "data_offset": 0, 00:13:25.412 "data_size": 65536 00:13:25.412 } 00:13:25.412 ] 00:13:25.412 }' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.412 19:11:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.794 19:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.794 "name": "raid_bdev1", 00:13:26.794 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:26.794 "strip_size_kb": 0, 00:13:26.794 "state": "online", 00:13:26.794 "raid_level": "raid1", 00:13:26.794 "superblock": false, 00:13:26.794 "num_base_bdevs": 2, 00:13:26.794 "num_base_bdevs_discovered": 2, 00:13:26.794 "num_base_bdevs_operational": 2, 00:13:26.794 "process": { 00:13:26.794 "type": "rebuild", 00:13:26.794 "target": "spare", 00:13:26.794 "progress": { 00:13:26.794 "blocks": 47104, 00:13:26.794 "percent": 71 00:13:26.794 } 00:13:26.794 }, 00:13:26.794 "base_bdevs_list": [ 00:13:26.794 { 00:13:26.794 "name": "spare", 00:13:26.794 "uuid": 
"ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:26.794 "is_configured": true, 00:13:26.794 "data_offset": 0, 00:13:26.794 "data_size": 65536 00:13:26.794 }, 00:13:26.794 { 00:13:26.794 "name": "BaseBdev2", 00:13:26.794 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:26.794 "is_configured": true, 00:13:26.794 "data_offset": 0, 00:13:26.794 "data_size": 65536 00:13:26.794 } 00:13:26.794 ] 00:13:26.794 }' 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.794 19:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.364 [2024-11-27 19:11:36.905775] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.364 [2024-11-27 19:11:36.905864] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.364 [2024-11-27 19:11:36.905917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.623 19:11:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.623 "name": "raid_bdev1", 00:13:27.623 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:27.623 "strip_size_kb": 0, 00:13:27.623 "state": "online", 00:13:27.623 "raid_level": "raid1", 00:13:27.623 "superblock": false, 00:13:27.623 "num_base_bdevs": 2, 00:13:27.623 "num_base_bdevs_discovered": 2, 00:13:27.623 "num_base_bdevs_operational": 2, 00:13:27.623 "base_bdevs_list": [ 00:13:27.623 { 00:13:27.623 "name": "spare", 00:13:27.623 "uuid": "ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:27.623 "is_configured": true, 00:13:27.623 "data_offset": 0, 00:13:27.623 "data_size": 65536 00:13:27.623 }, 00:13:27.623 { 00:13:27.623 "name": "BaseBdev2", 00:13:27.623 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:27.623 "is_configured": true, 00:13:27.623 "data_offset": 0, 00:13:27.623 "data_size": 65536 00:13:27.623 } 00:13:27.623 ] 00:13:27.623 }' 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.623 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.882 "name": "raid_bdev1", 00:13:27.882 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:27.882 "strip_size_kb": 0, 00:13:27.882 "state": "online", 00:13:27.882 "raid_level": "raid1", 00:13:27.882 "superblock": false, 00:13:27.882 "num_base_bdevs": 2, 00:13:27.882 "num_base_bdevs_discovered": 2, 00:13:27.882 "num_base_bdevs_operational": 2, 00:13:27.882 "base_bdevs_list": [ 00:13:27.882 { 00:13:27.882 "name": "spare", 00:13:27.882 "uuid": "ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:27.882 "is_configured": true, 00:13:27.882 "data_offset": 0, 00:13:27.882 "data_size": 65536 00:13:27.882 }, 00:13:27.882 { 00:13:27.882 "name": "BaseBdev2", 00:13:27.882 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:27.882 "is_configured": true, 00:13:27.882 "data_offset": 0, 00:13:27.882 "data_size": 65536 
00:13:27.882 } 00:13:27.882 ] 00:13:27.882 }' 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.882 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.883 
19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.883 "name": "raid_bdev1", 00:13:27.883 "uuid": "eb61ded5-cc65-4b1e-a302-5281415ce235", 00:13:27.883 "strip_size_kb": 0, 00:13:27.883 "state": "online", 00:13:27.883 "raid_level": "raid1", 00:13:27.883 "superblock": false, 00:13:27.883 "num_base_bdevs": 2, 00:13:27.883 "num_base_bdevs_discovered": 2, 00:13:27.883 "num_base_bdevs_operational": 2, 00:13:27.883 "base_bdevs_list": [ 00:13:27.883 { 00:13:27.883 "name": "spare", 00:13:27.883 "uuid": "ead88afc-0d4b-5fd7-9855-4c727dcd0d17", 00:13:27.883 "is_configured": true, 00:13:27.883 "data_offset": 0, 00:13:27.883 "data_size": 65536 00:13:27.883 }, 00:13:27.883 { 00:13:27.883 "name": "BaseBdev2", 00:13:27.883 "uuid": "df0b8bfa-1439-58c9-a3d5-608b419507e6", 00:13:27.883 "is_configured": true, 00:13:27.883 "data_offset": 0, 00:13:27.883 "data_size": 65536 00:13:27.883 } 00:13:27.883 ] 00:13:27.883 }' 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.883 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.451 [2024-11-27 19:11:37.852544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.451 [2024-11-27 19:11:37.852643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.451 [2024-11-27 19:11:37.852772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.451 [2024-11-27 19:11:37.852873] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.451 [2024-11-27 19:11:37.852922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.451 19:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.711 /dev/nbd0 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.711 1+0 records in 00:13:28.711 1+0 records out 00:13:28.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025572 s, 16.0 MB/s 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.711 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:28.971 /dev/nbd1 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.971 1+0 records in 00:13:28.971 1+0 records out 00:13:28.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420566 s, 9.7 MB/s 00:13:28.971 19:11:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.971 19:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.230 
19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.230 19:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75414 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75414 ']' 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75414 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75414 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75414' 00:13:29.489 killing process with pid 75414 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75414 00:13:29.489 Received shutdown signal, test time was about 60.000000 seconds 00:13:29.489 00:13:29.489 Latency(us) 00:13:29.489 [2024-11-27T19:11:39.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.489 [2024-11-27T19:11:39.125Z] =================================================================================================================== 00:13:29.489 [2024-11-27T19:11:39.125Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.489 [2024-11-27 19:11:39.103494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.489 19:11:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75414 00:13:30.064 [2024-11-27 19:11:39.397704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:31.003 00:13:31.003 real 0m16.406s 00:13:31.003 user 0m18.040s 00:13:31.003 sys 0m3.620s 00:13:31.003 ************************************ 00:13:31.003 END TEST raid_rebuild_test 00:13:31.003 ************************************ 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.003 19:11:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.003 19:11:40 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:31.003 19:11:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:31.003 19:11:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.003 19:11:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.003 ************************************ 00:13:31.003 START TEST raid_rebuild_test_sb 00:13:31.003 ************************************ 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.003 19:11:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75843 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75843 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75843 ']' 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.003 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.003 19:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.263 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:31.263 Zero copy mechanism will not be used. 00:13:31.263 [2024-11-27 19:11:40.676309] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:31.263 [2024-11-27 19:11:40.676427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75843 ] 00:13:31.263 [2024-11-27 19:11:40.850393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.522 [2024-11-27 19:11:40.960780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.522 [2024-11-27 19:11:41.149685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.522 [2024-11-27 19:11:41.149734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.092 BaseBdev1_malloc 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.092 [2024-11-27 19:11:41.557556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.092 [2024-11-27 19:11:41.557627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.092 [2024-11-27 19:11:41.557666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.092 [2024-11-27 19:11:41.557677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.092 [2024-11-27 19:11:41.559727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.092 [2024-11-27 19:11:41.559853] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.092 BaseBdev1 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.092 BaseBdev2_malloc 00:13:32.092 
19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.092 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.092 [2024-11-27 19:11:41.610240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:32.092 [2024-11-27 19:11:41.610370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.092 [2024-11-27 19:11:41.610394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.092 [2024-11-27 19:11:41.610405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.092 [2024-11-27 19:11:41.612419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.092 [2024-11-27 19:11:41.612459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:32.092 BaseBdev2 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.093 spare_malloc 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.093 spare_delay 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.093 [2024-11-27 19:11:41.712944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.093 [2024-11-27 19:11:41.713072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.093 [2024-11-27 19:11:41.713096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:32.093 [2024-11-27 19:11:41.713107] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.093 [2024-11-27 19:11:41.715079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.093 [2024-11-27 19:11:41.715122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.093 spare 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.093 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.093 [2024-11-27 19:11:41.724978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.353 [2024-11-27 
19:11:41.726664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.353 [2024-11-27 19:11:41.726849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:32.353 [2024-11-27 19:11:41.726865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.353 [2024-11-27 19:11:41.727088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:32.353 [2024-11-27 19:11:41.727246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:32.353 [2024-11-27 19:11:41.727255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:32.353 [2024-11-27 19:11:41.727401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.353 "name": "raid_bdev1", 00:13:32.353 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:32.353 "strip_size_kb": 0, 00:13:32.353 "state": "online", 00:13:32.353 "raid_level": "raid1", 00:13:32.353 "superblock": true, 00:13:32.353 "num_base_bdevs": 2, 00:13:32.353 "num_base_bdevs_discovered": 2, 00:13:32.353 "num_base_bdevs_operational": 2, 00:13:32.353 "base_bdevs_list": [ 00:13:32.353 { 00:13:32.353 "name": "BaseBdev1", 00:13:32.353 "uuid": "0cceb20a-5263-5811-9bda-f42be2b7574c", 00:13:32.353 "is_configured": true, 00:13:32.353 "data_offset": 2048, 00:13:32.353 "data_size": 63488 00:13:32.353 }, 00:13:32.353 { 00:13:32.353 "name": "BaseBdev2", 00:13:32.353 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:32.353 "is_configured": true, 00:13:32.353 "data_offset": 2048, 00:13:32.353 "data_size": 63488 00:13:32.353 } 00:13:32.353 ] 00:13:32.353 }' 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.353 19:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 [2024-11-27 19:11:42.120582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.614 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:32.874 [2024-11-27 19:11:42.371928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:32.874 /dev/nbd0 00:13:32.874 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.874 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.874 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.875 1+0 records in 00:13:32.875 1+0 records out 00:13:32.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274066 s, 14.9 MB/s 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:32.875 19:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:38.156 63488+0 records in 00:13:38.156 63488+0 records out 00:13:38.156 32505856 bytes (33 MB, 31 MiB) copied, 4.32623 s, 7.5 MB/s 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.156 [2024-11-27 19:11:46.965555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.156 19:11:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.156 [2024-11-27 19:11:47.001577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.156 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.156 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:38.156 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.157 "name": "raid_bdev1", 00:13:38.157 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:38.157 "strip_size_kb": 0, 00:13:38.157 "state": "online", 00:13:38.157 "raid_level": "raid1", 00:13:38.157 "superblock": true, 00:13:38.157 "num_base_bdevs": 2, 00:13:38.157 "num_base_bdevs_discovered": 1, 00:13:38.157 "num_base_bdevs_operational": 1, 00:13:38.157 "base_bdevs_list": [ 00:13:38.157 { 00:13:38.157 "name": null, 00:13:38.157 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:38.157 "is_configured": false, 00:13:38.157 "data_offset": 0, 00:13:38.157 "data_size": 63488 00:13:38.157 }, 00:13:38.157 { 00:13:38.157 "name": "BaseBdev2", 00:13:38.157 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:38.157 "is_configured": true, 00:13:38.157 "data_offset": 2048, 00:13:38.157 "data_size": 63488 00:13:38.157 } 00:13:38.157 ] 00:13:38.157 }' 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.157 [2024-11-27 19:11:47.476785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.157 [2024-11-27 19:11:47.494023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:38.157 [2024-11-27 19:11:47.495906] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.157 19:11:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:39.097 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.097 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.097 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.097 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.097 
19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.097 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.098 "name": "raid_bdev1", 00:13:39.098 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:39.098 "strip_size_kb": 0, 00:13:39.098 "state": "online", 00:13:39.098 "raid_level": "raid1", 00:13:39.098 "superblock": true, 00:13:39.098 "num_base_bdevs": 2, 00:13:39.098 "num_base_bdevs_discovered": 2, 00:13:39.098 "num_base_bdevs_operational": 2, 00:13:39.098 "process": { 00:13:39.098 "type": "rebuild", 00:13:39.098 "target": "spare", 00:13:39.098 "progress": { 00:13:39.098 "blocks": 20480, 00:13:39.098 "percent": 32 00:13:39.098 } 00:13:39.098 }, 00:13:39.098 "base_bdevs_list": [ 00:13:39.098 { 00:13:39.098 "name": "spare", 00:13:39.098 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:39.098 "is_configured": true, 00:13:39.098 "data_offset": 2048, 00:13:39.098 "data_size": 63488 00:13:39.098 }, 00:13:39.098 { 00:13:39.098 "name": "BaseBdev2", 00:13:39.098 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:39.098 "is_configured": true, 00:13:39.098 "data_offset": 2048, 00:13:39.098 "data_size": 63488 00:13:39.098 } 00:13:39.098 ] 00:13:39.098 }' 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.098 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.098 [2024-11-27 19:11:48.639485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.098 [2024-11-27 19:11:48.700993] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.098 [2024-11-27 19:11:48.701056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.098 [2024-11-27 19:11:48.701072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.098 [2024-11-27 19:11:48.701085] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.358 19:11:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.358 "name": "raid_bdev1", 00:13:39.358 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:39.358 "strip_size_kb": 0, 00:13:39.358 "state": "online", 00:13:39.358 "raid_level": "raid1", 00:13:39.358 "superblock": true, 00:13:39.358 "num_base_bdevs": 2, 00:13:39.358 "num_base_bdevs_discovered": 1, 00:13:39.359 "num_base_bdevs_operational": 1, 00:13:39.359 "base_bdevs_list": [ 00:13:39.359 { 00:13:39.359 "name": null, 00:13:39.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.359 "is_configured": false, 00:13:39.359 "data_offset": 0, 00:13:39.359 "data_size": 63488 00:13:39.359 }, 00:13:39.359 { 00:13:39.359 "name": "BaseBdev2", 00:13:39.359 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:39.359 "is_configured": true, 00:13:39.359 "data_offset": 2048, 00:13:39.359 "data_size": 63488 00:13:39.359 } 00:13:39.359 ] 00:13:39.359 }' 00:13:39.359 19:11:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.359 19:11:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.619 "name": "raid_bdev1", 00:13:39.619 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:39.619 "strip_size_kb": 0, 00:13:39.619 "state": "online", 00:13:39.619 "raid_level": "raid1", 00:13:39.619 "superblock": true, 00:13:39.619 "num_base_bdevs": 2, 00:13:39.619 "num_base_bdevs_discovered": 1, 00:13:39.619 "num_base_bdevs_operational": 1, 00:13:39.619 "base_bdevs_list": [ 00:13:39.619 { 00:13:39.619 "name": null, 00:13:39.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.619 "is_configured": false, 00:13:39.619 "data_offset": 0, 00:13:39.619 "data_size": 63488 00:13:39.619 }, 00:13:39.619 
{ 00:13:39.619 "name": "BaseBdev2", 00:13:39.619 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:39.619 "is_configured": true, 00:13:39.619 "data_offset": 2048, 00:13:39.619 "data_size": 63488 00:13:39.619 } 00:13:39.619 ] 00:13:39.619 }' 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.619 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.878 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.879 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.879 19:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.879 19:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.879 [2024-11-27 19:11:49.304273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.879 [2024-11-27 19:11:49.320400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:39.879 19:11:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.879 19:11:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:39.879 [2024-11-27 19:11:49.322264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.817 19:11:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.817 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.817 "name": "raid_bdev1", 00:13:40.817 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:40.817 "strip_size_kb": 0, 00:13:40.817 "state": "online", 00:13:40.817 "raid_level": "raid1", 00:13:40.817 "superblock": true, 00:13:40.817 "num_base_bdevs": 2, 00:13:40.817 "num_base_bdevs_discovered": 2, 00:13:40.817 "num_base_bdevs_operational": 2, 00:13:40.817 "process": { 00:13:40.818 "type": "rebuild", 00:13:40.818 "target": "spare", 00:13:40.818 "progress": { 00:13:40.818 "blocks": 20480, 00:13:40.818 "percent": 32 00:13:40.818 } 00:13:40.818 }, 00:13:40.818 "base_bdevs_list": [ 00:13:40.818 { 00:13:40.818 "name": "spare", 00:13:40.818 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:40.818 "is_configured": true, 00:13:40.818 "data_offset": 2048, 00:13:40.818 "data_size": 63488 00:13:40.818 }, 00:13:40.818 { 00:13:40.818 "name": "BaseBdev2", 00:13:40.818 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:40.818 "is_configured": true, 00:13:40.818 "data_offset": 2048, 00:13:40.818 "data_size": 63488 00:13:40.818 } 00:13:40.818 ] 00:13:40.818 }' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:40.818 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.818 19:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.079 19:11:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.079 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.079 "name": "raid_bdev1", 00:13:41.079 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:41.079 "strip_size_kb": 0, 00:13:41.079 "state": "online", 00:13:41.079 "raid_level": "raid1", 00:13:41.079 "superblock": true, 00:13:41.079 "num_base_bdevs": 2, 00:13:41.079 "num_base_bdevs_discovered": 2, 00:13:41.079 "num_base_bdevs_operational": 2, 00:13:41.079 "process": { 00:13:41.079 "type": "rebuild", 00:13:41.079 "target": "spare", 00:13:41.079 "progress": { 00:13:41.079 "blocks": 22528, 00:13:41.079 "percent": 35 00:13:41.079 } 00:13:41.079 }, 00:13:41.079 "base_bdevs_list": [ 00:13:41.079 { 00:13:41.079 "name": "spare", 00:13:41.079 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:41.079 "is_configured": true, 00:13:41.079 "data_offset": 2048, 00:13:41.079 "data_size": 63488 00:13:41.079 }, 00:13:41.079 { 00:13:41.079 "name": "BaseBdev2", 00:13:41.079 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:41.079 "is_configured": true, 00:13:41.079 "data_offset": 2048, 00:13:41.079 "data_size": 63488 00:13:41.079 } 00:13:41.079 ] 00:13:41.079 }' 00:13:41.079 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.079 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.079 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.079 19:11:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.079 19:11:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.022 "name": "raid_bdev1", 00:13:42.022 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:42.022 "strip_size_kb": 0, 00:13:42.022 "state": "online", 00:13:42.022 "raid_level": "raid1", 00:13:42.022 "superblock": true, 00:13:42.022 "num_base_bdevs": 2, 00:13:42.022 "num_base_bdevs_discovered": 2, 00:13:42.022 "num_base_bdevs_operational": 2, 00:13:42.022 "process": { 00:13:42.022 "type": "rebuild", 00:13:42.022 "target": "spare", 00:13:42.022 "progress": { 00:13:42.022 "blocks": 45056, 00:13:42.022 "percent": 70 00:13:42.022 } 00:13:42.022 }, 00:13:42.022 "base_bdevs_list": [ 00:13:42.022 { 
00:13:42.022 "name": "spare", 00:13:42.022 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:42.022 "is_configured": true, 00:13:42.022 "data_offset": 2048, 00:13:42.022 "data_size": 63488 00:13:42.022 }, 00:13:42.022 { 00:13:42.022 "name": "BaseBdev2", 00:13:42.022 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:42.022 "is_configured": true, 00:13:42.022 "data_offset": 2048, 00:13:42.022 "data_size": 63488 00:13:42.022 } 00:13:42.022 ] 00:13:42.022 }' 00:13:42.022 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.282 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.282 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.282 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.282 19:11:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.853 [2024-11-27 19:11:52.434799] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.853 [2024-11-27 19:11:52.434870] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.853 [2024-11-27 19:11:52.434971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.113 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.113 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.113 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.113 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.113 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.113 19:11:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.374 "name": "raid_bdev1", 00:13:43.374 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:43.374 "strip_size_kb": 0, 00:13:43.374 "state": "online", 00:13:43.374 "raid_level": "raid1", 00:13:43.374 "superblock": true, 00:13:43.374 "num_base_bdevs": 2, 00:13:43.374 "num_base_bdevs_discovered": 2, 00:13:43.374 "num_base_bdevs_operational": 2, 00:13:43.374 "base_bdevs_list": [ 00:13:43.374 { 00:13:43.374 "name": "spare", 00:13:43.374 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:43.374 "is_configured": true, 00:13:43.374 "data_offset": 2048, 00:13:43.374 "data_size": 63488 00:13:43.374 }, 00:13:43.374 { 00:13:43.374 "name": "BaseBdev2", 00:13:43.374 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:43.374 "is_configured": true, 00:13:43.374 "data_offset": 2048, 00:13:43.374 "data_size": 63488 00:13:43.374 } 00:13:43.374 ] 00:13:43.374 }' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.374 "name": "raid_bdev1", 00:13:43.374 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:43.374 "strip_size_kb": 0, 00:13:43.374 "state": "online", 00:13:43.374 "raid_level": "raid1", 00:13:43.374 "superblock": true, 00:13:43.374 "num_base_bdevs": 2, 00:13:43.374 "num_base_bdevs_discovered": 2, 00:13:43.374 "num_base_bdevs_operational": 2, 00:13:43.374 "base_bdevs_list": [ 00:13:43.374 { 00:13:43.374 "name": "spare", 00:13:43.374 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:43.374 "is_configured": true, 00:13:43.374 "data_offset": 2048, 00:13:43.374 "data_size": 63488 00:13:43.374 }, 00:13:43.374 { 00:13:43.374 "name": 
"BaseBdev2", 00:13:43.374 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:43.374 "is_configured": true, 00:13:43.374 "data_offset": 2048, 00:13:43.374 "data_size": 63488 00:13:43.374 } 00:13:43.374 ] 00:13:43.374 }' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.374 19:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:43.634 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.635 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.635 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.635 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.635 "name": "raid_bdev1", 00:13:43.635 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:43.635 "strip_size_kb": 0, 00:13:43.635 "state": "online", 00:13:43.635 "raid_level": "raid1", 00:13:43.635 "superblock": true, 00:13:43.635 "num_base_bdevs": 2, 00:13:43.635 "num_base_bdevs_discovered": 2, 00:13:43.635 "num_base_bdevs_operational": 2, 00:13:43.635 "base_bdevs_list": [ 00:13:43.635 { 00:13:43.635 "name": "spare", 00:13:43.635 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:43.635 "is_configured": true, 00:13:43.635 "data_offset": 2048, 00:13:43.635 "data_size": 63488 00:13:43.635 }, 00:13:43.635 { 00:13:43.635 "name": "BaseBdev2", 00:13:43.635 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:43.635 "is_configured": true, 00:13:43.635 "data_offset": 2048, 00:13:43.635 "data_size": 63488 00:13:43.635 } 00:13:43.635 ] 00:13:43.635 }' 00:13:43.635 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.635 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.894 [2024-11-27 19:11:53.471333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.894 [2024-11-27 19:11:53.471381] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.894 [2024-11-27 19:11:53.471470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.894 [2024-11-27 19:11:53.471539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.894 [2024-11-27 19:11:53.471549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.894 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.895 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:44.156 /dev/nbd0 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.156 1+0 records in 00:13:44.156 1+0 records out 00:13:44.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000241264 s, 17.0 MB/s 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.156 19:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:44.416 /dev/nbd1 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.416 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.676 19:11:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.676 1+0 records in 00:13:44.676 1+0 records out 00:13:44.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348297 s, 11.8 MB/s 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.676 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.677 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.677 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:44.677 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.677 
19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.937 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.198 [2024-11-27 19:11:54.712837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.198 [2024-11-27 19:11:54.712962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.198 [2024-11-27 19:11:54.712993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:45.198 [2024-11-27 19:11:54.713002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.198 [2024-11-27 19:11:54.715135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.198 [2024-11-27 19:11:54.715172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.198 [2024-11-27 19:11:54.715261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.198 [2024-11-27 19:11:54.715305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.198 [2024-11-27 19:11:54.715464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:45.198 spare 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.198 [2024-11-27 19:11:54.815360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:45.198 [2024-11-27 19:11:54.815389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.198 [2024-11-27 19:11:54.815638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:45.198 [2024-11-27 19:11:54.815869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:45.198 [2024-11-27 19:11:54.815880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:45.198 [2024-11-27 19:11:54.816021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.198 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.458 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.458 19:11:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.458 "name": "raid_bdev1", 00:13:45.458 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:45.458 "strip_size_kb": 0, 00:13:45.458 "state": "online", 00:13:45.458 "raid_level": "raid1", 00:13:45.458 "superblock": true, 00:13:45.458 "num_base_bdevs": 2, 00:13:45.458 "num_base_bdevs_discovered": 2, 00:13:45.458 "num_base_bdevs_operational": 2, 00:13:45.458 "base_bdevs_list": [ 00:13:45.458 { 00:13:45.458 "name": "spare", 00:13:45.458 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:45.458 "is_configured": true, 00:13:45.458 "data_offset": 2048, 00:13:45.458 "data_size": 63488 00:13:45.458 }, 00:13:45.458 { 00:13:45.458 "name": "BaseBdev2", 00:13:45.458 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:45.458 "is_configured": true, 00:13:45.458 "data_offset": 2048, 00:13:45.458 "data_size": 63488 00:13:45.458 } 00:13:45.458 ] 00:13:45.458 }' 00:13:45.458 19:11:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.458 19:11:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.717 "name": "raid_bdev1", 00:13:45.717 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:45.717 "strip_size_kb": 0, 00:13:45.717 "state": "online", 00:13:45.717 "raid_level": "raid1", 00:13:45.717 "superblock": true, 00:13:45.717 "num_base_bdevs": 2, 00:13:45.717 "num_base_bdevs_discovered": 2, 00:13:45.717 "num_base_bdevs_operational": 2, 00:13:45.717 "base_bdevs_list": [ 00:13:45.717 { 00:13:45.717 "name": "spare", 00:13:45.717 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:45.717 "is_configured": true, 00:13:45.717 "data_offset": 2048, 00:13:45.717 "data_size": 63488 00:13:45.717 }, 
00:13:45.717 { 00:13:45.717 "name": "BaseBdev2", 00:13:45.717 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:45.717 "is_configured": true, 00:13:45.717 "data_offset": 2048, 00:13:45.717 "data_size": 63488 00:13:45.717 } 00:13:45.717 ] 00:13:45.717 }' 00:13:45.717 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.975 [2024-11-27 19:11:55.467603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.975 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.976 "name": "raid_bdev1", 00:13:45.976 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:45.976 "strip_size_kb": 0, 00:13:45.976 "state": "online", 00:13:45.976 "raid_level": "raid1", 00:13:45.976 "superblock": true, 00:13:45.976 "num_base_bdevs": 2, 00:13:45.976 "num_base_bdevs_discovered": 1, 00:13:45.976 "num_base_bdevs_operational": 
1, 00:13:45.976 "base_bdevs_list": [ 00:13:45.976 { 00:13:45.976 "name": null, 00:13:45.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.976 "is_configured": false, 00:13:45.976 "data_offset": 0, 00:13:45.976 "data_size": 63488 00:13:45.976 }, 00:13:45.976 { 00:13:45.976 "name": "BaseBdev2", 00:13:45.976 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:45.976 "is_configured": true, 00:13:45.976 "data_offset": 2048, 00:13:45.976 "data_size": 63488 00:13:45.976 } 00:13:45.976 ] 00:13:45.976 }' 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.976 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.545 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.545 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.545 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.545 [2024-11-27 19:11:55.894997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.545 [2024-11-27 19:11:55.895275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:46.545 [2024-11-27 19:11:55.895337] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:46.545 [2024-11-27 19:11:55.895407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.545 [2024-11-27 19:11:55.911311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:46.545 19:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.545 19:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:46.545 [2024-11-27 19:11:55.913161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.484 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.484 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.485 "name": "raid_bdev1", 00:13:47.485 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:47.485 "strip_size_kb": 0, 00:13:47.485 "state": "online", 00:13:47.485 "raid_level": "raid1", 
00:13:47.485 "superblock": true, 00:13:47.485 "num_base_bdevs": 2, 00:13:47.485 "num_base_bdevs_discovered": 2, 00:13:47.485 "num_base_bdevs_operational": 2, 00:13:47.485 "process": { 00:13:47.485 "type": "rebuild", 00:13:47.485 "target": "spare", 00:13:47.485 "progress": { 00:13:47.485 "blocks": 20480, 00:13:47.485 "percent": 32 00:13:47.485 } 00:13:47.485 }, 00:13:47.485 "base_bdevs_list": [ 00:13:47.485 { 00:13:47.485 "name": "spare", 00:13:47.485 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:47.485 "is_configured": true, 00:13:47.485 "data_offset": 2048, 00:13:47.485 "data_size": 63488 00:13:47.485 }, 00:13:47.485 { 00:13:47.485 "name": "BaseBdev2", 00:13:47.485 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:47.485 "is_configured": true, 00:13:47.485 "data_offset": 2048, 00:13:47.485 "data_size": 63488 00:13:47.485 } 00:13:47.485 ] 00:13:47.485 }' 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.485 19:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.485 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.485 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.485 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:47.485 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.485 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.485 [2024-11-27 19:11:57.057246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.485 [2024-11-27 19:11:57.118180] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.485 [2024-11-27 19:11:57.118314] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:47.485 [2024-11-27 19:11:57.118330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.485 [2024-11-27 19:11:57.118339] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.744 "name": "raid_bdev1", 00:13:47.744 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:47.744 "strip_size_kb": 0, 00:13:47.744 "state": "online", 00:13:47.744 "raid_level": "raid1", 00:13:47.744 "superblock": true, 00:13:47.744 "num_base_bdevs": 2, 00:13:47.744 "num_base_bdevs_discovered": 1, 00:13:47.744 "num_base_bdevs_operational": 1, 00:13:47.744 "base_bdevs_list": [ 00:13:47.744 { 00:13:47.744 "name": null, 00:13:47.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.744 "is_configured": false, 00:13:47.744 "data_offset": 0, 00:13:47.744 "data_size": 63488 00:13:47.744 }, 00:13:47.744 { 00:13:47.744 "name": "BaseBdev2", 00:13:47.744 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:47.744 "is_configured": true, 00:13:47.744 "data_offset": 2048, 00:13:47.744 "data_size": 63488 00:13:47.744 } 00:13:47.744 ] 00:13:47.744 }' 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.744 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.004 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.004 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.004 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.004 [2024-11-27 19:11:57.604082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.004 [2024-11-27 19:11:57.604210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.004 [2024-11-27 19:11:57.604248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:48.004 [2024-11-27 19:11:57.604278] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.004 [2024-11-27 19:11:57.604773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.004 [2024-11-27 19:11:57.604834] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.004 [2024-11-27 19:11:57.604951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.004 [2024-11-27 19:11:57.604991] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.004 [2024-11-27 19:11:57.605036] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:48.004 [2024-11-27 19:11:57.605094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.004 [2024-11-27 19:11:57.620935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:48.004 spare 00:13:48.004 19:11:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.004 [2024-11-27 19:11:57.622789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.004 19:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:49.398 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.398 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.398 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.399 "name": "raid_bdev1", 00:13:49.399 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:49.399 "strip_size_kb": 0, 00:13:49.399 "state": "online", 00:13:49.399 "raid_level": "raid1", 00:13:49.399 "superblock": true, 00:13:49.399 "num_base_bdevs": 2, 00:13:49.399 "num_base_bdevs_discovered": 2, 00:13:49.399 "num_base_bdevs_operational": 2, 00:13:49.399 "process": { 00:13:49.399 "type": "rebuild", 00:13:49.399 "target": "spare", 00:13:49.399 "progress": { 00:13:49.399 "blocks": 20480, 00:13:49.399 "percent": 32 00:13:49.399 } 00:13:49.399 }, 00:13:49.399 "base_bdevs_list": [ 00:13:49.399 { 00:13:49.399 "name": "spare", 00:13:49.399 "uuid": "7651f4a3-32d7-50d3-8b29-98c19573e1bc", 00:13:49.399 "is_configured": true, 00:13:49.399 "data_offset": 2048, 00:13:49.399 "data_size": 63488 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "name": "BaseBdev2", 00:13:49.399 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:49.399 "is_configured": true, 00:13:49.399 "data_offset": 2048, 00:13:49.399 "data_size": 63488 00:13:49.399 } 00:13:49.399 ] 00:13:49.399 }' 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.399 
19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 [2024-11-27 19:11:58.766977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.399 [2024-11-27 19:11:58.827957] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.399 [2024-11-27 19:11:58.828071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.399 [2024-11-27 19:11:58.828110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.399 [2024-11-27 19:11:58.828131] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.399 "name": "raid_bdev1", 00:13:49.399 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:49.399 "strip_size_kb": 0, 00:13:49.399 "state": "online", 00:13:49.399 "raid_level": "raid1", 00:13:49.399 "superblock": true, 00:13:49.399 "num_base_bdevs": 2, 00:13:49.399 "num_base_bdevs_discovered": 1, 00:13:49.399 "num_base_bdevs_operational": 1, 00:13:49.399 "base_bdevs_list": [ 00:13:49.399 { 00:13:49.399 "name": null, 00:13:49.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.399 "is_configured": false, 00:13:49.399 "data_offset": 0, 00:13:49.399 "data_size": 63488 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "name": "BaseBdev2", 00:13:49.399 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:49.399 "is_configured": true, 00:13:49.399 "data_offset": 2048, 00:13:49.399 "data_size": 63488 00:13:49.399 } 00:13:49.399 ] 00:13:49.399 }' 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.399 19:11:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.685 19:11:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.685 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.956 "name": "raid_bdev1", 00:13:49.956 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:49.956 "strip_size_kb": 0, 00:13:49.956 "state": "online", 00:13:49.956 "raid_level": "raid1", 00:13:49.956 "superblock": true, 00:13:49.956 "num_base_bdevs": 2, 00:13:49.956 "num_base_bdevs_discovered": 1, 00:13:49.956 "num_base_bdevs_operational": 1, 00:13:49.956 "base_bdevs_list": [ 00:13:49.956 { 00:13:49.956 "name": null, 00:13:49.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.956 "is_configured": false, 00:13:49.956 "data_offset": 0, 00:13:49.956 "data_size": 63488 00:13:49.956 }, 00:13:49.956 { 00:13:49.956 "name": "BaseBdev2", 00:13:49.956 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:49.956 "is_configured": true, 00:13:49.956 "data_offset": 2048, 00:13:49.956 "data_size": 
63488 00:13:49.956 } 00:13:49.956 ] 00:13:49.956 }' 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.956 [2024-11-27 19:11:59.445227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.956 [2024-11-27 19:11:59.445288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.956 [2024-11-27 19:11:59.445318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:49.956 [2024-11-27 19:11:59.445335] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.956 [2024-11-27 19:11:59.445787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.956 [2024-11-27 19:11:59.445813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:49.956 [2024-11-27 19:11:59.445894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:49.956 [2024-11-27 19:11:59.445907] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.956 [2024-11-27 19:11:59.445916] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.956 [2024-11-27 19:11:59.445928] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:49.956 BaseBdev1 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.956 19:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.895 "name": "raid_bdev1", 00:13:50.895 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:50.895 "strip_size_kb": 0, 00:13:50.895 "state": "online", 00:13:50.895 "raid_level": "raid1", 00:13:50.895 "superblock": true, 00:13:50.895 "num_base_bdevs": 2, 00:13:50.895 "num_base_bdevs_discovered": 1, 00:13:50.895 "num_base_bdevs_operational": 1, 00:13:50.895 "base_bdevs_list": [ 00:13:50.895 { 00:13:50.895 "name": null, 00:13:50.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.895 "is_configured": false, 00:13:50.895 "data_offset": 0, 00:13:50.895 "data_size": 63488 00:13:50.895 }, 00:13:50.895 { 00:13:50.895 "name": "BaseBdev2", 00:13:50.895 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:50.895 "is_configured": true, 00:13:50.895 "data_offset": 2048, 00:13:50.895 "data_size": 63488 00:13:50.895 } 00:13:50.895 ] 00:13:50.895 }' 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.895 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.465 19:12:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.465 "name": "raid_bdev1", 00:13:51.465 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:51.465 "strip_size_kb": 0, 00:13:51.465 "state": "online", 00:13:51.465 "raid_level": "raid1", 00:13:51.465 "superblock": true, 00:13:51.465 "num_base_bdevs": 2, 00:13:51.465 "num_base_bdevs_discovered": 1, 00:13:51.465 "num_base_bdevs_operational": 1, 00:13:51.465 "base_bdevs_list": [ 00:13:51.465 { 00:13:51.465 "name": null, 00:13:51.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.465 "is_configured": false, 00:13:51.465 "data_offset": 0, 00:13:51.465 "data_size": 63488 00:13:51.465 }, 00:13:51.465 { 00:13:51.465 "name": "BaseBdev2", 00:13:51.465 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:51.465 "is_configured": true, 00:13:51.465 "data_offset": 2048, 00:13:51.465 "data_size": 63488 00:13:51.465 } 00:13:51.465 ] 00:13:51.465 }' 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.465 19:12:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.465 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.725 [2024-11-27 19:12:01.110564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.725 [2024-11-27 19:12:01.110821] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.725 [2024-11-27 19:12:01.110865] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:51.725 request: 00:13:51.725 { 00:13:51.725 "base_bdev": "BaseBdev1", 00:13:51.725 "raid_bdev": "raid_bdev1", 00:13:51.725 "method": 
"bdev_raid_add_base_bdev", 00:13:51.725 "req_id": 1 00:13:51.725 } 00:13:51.725 Got JSON-RPC error response 00:13:51.725 response: 00:13:51.725 { 00:13:51.725 "code": -22, 00:13:51.725 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:51.725 } 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:51.725 19:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.665 19:12:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.665 "name": "raid_bdev1", 00:13:52.665 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:52.665 "strip_size_kb": 0, 00:13:52.665 "state": "online", 00:13:52.665 "raid_level": "raid1", 00:13:52.665 "superblock": true, 00:13:52.665 "num_base_bdevs": 2, 00:13:52.665 "num_base_bdevs_discovered": 1, 00:13:52.665 "num_base_bdevs_operational": 1, 00:13:52.665 "base_bdevs_list": [ 00:13:52.665 { 00:13:52.665 "name": null, 00:13:52.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.665 "is_configured": false, 00:13:52.665 "data_offset": 0, 00:13:52.665 "data_size": 63488 00:13:52.665 }, 00:13:52.665 { 00:13:52.665 "name": "BaseBdev2", 00:13:52.665 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:52.665 "is_configured": true, 00:13:52.665 "data_offset": 2048, 00:13:52.665 "data_size": 63488 00:13:52.665 } 00:13:52.665 ] 00:13:52.665 }' 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.665 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.235 "name": "raid_bdev1", 00:13:53.235 "uuid": "dbeea84c-c4a3-47b7-8c50-c120e8cb6028", 00:13:53.235 "strip_size_kb": 0, 00:13:53.235 "state": "online", 00:13:53.235 "raid_level": "raid1", 00:13:53.235 "superblock": true, 00:13:53.235 "num_base_bdevs": 2, 00:13:53.235 "num_base_bdevs_discovered": 1, 00:13:53.235 "num_base_bdevs_operational": 1, 00:13:53.235 "base_bdevs_list": [ 00:13:53.235 { 00:13:53.235 "name": null, 00:13:53.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.235 "is_configured": false, 00:13:53.235 "data_offset": 0, 00:13:53.235 "data_size": 63488 00:13:53.235 }, 00:13:53.235 { 00:13:53.235 "name": "BaseBdev2", 00:13:53.235 "uuid": "b3223abc-5be4-5ee7-b23e-1152bc2a20c4", 00:13:53.235 "is_configured": true, 00:13:53.235 "data_offset": 2048, 00:13:53.235 "data_size": 63488 00:13:53.235 } 00:13:53.235 ] 00:13:53.235 }' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75843 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75843 ']' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75843 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75843 00:13:53.235 killing process with pid 75843 00:13:53.235 Received shutdown signal, test time was about 60.000000 seconds 00:13:53.235 00:13:53.235 Latency(us) 00:13:53.235 [2024-11-27T19:12:02.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.235 [2024-11-27T19:12:02.871Z] =================================================================================================================== 00:13:53.235 [2024-11-27T19:12:02.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75843' 00:13:53.235 19:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75843 00:13:53.235 [2024-11-27 19:12:02.748652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.235 19:12:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75843 00:13:53.235 [2024-11-27 19:12:02.748830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.235 [2024-11-27 19:12:02.748904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.235 [2024-11-27 19:12:02.748918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:53.495 [2024-11-27 19:12:03.077027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:54.879 00:13:54.879 real 0m23.693s 00:13:54.879 user 0m28.139s 00:13:54.879 sys 0m3.774s 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.879 ************************************ 00:13:54.879 END TEST raid_rebuild_test_sb 00:13:54.879 ************************************ 00:13:54.879 19:12:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:54.879 19:12:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:54.879 19:12:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.879 19:12:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.879 ************************************ 00:13:54.879 START TEST raid_rebuild_test_io 00:13:54.879 ************************************ 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:54.879 
19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76576 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76576 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76576 ']' 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.879 19:12:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.879 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:54.879 Zero copy mechanism will not be used. 00:13:54.879 [2024-11-27 19:12:04.455986] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:54.879 [2024-11-27 19:12:04.456108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76576 ] 00:13:55.139 [2024-11-27 19:12:04.636417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.139 [2024-11-27 19:12:04.751306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.400 [2024-11-27 19:12:04.945052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.400 [2024-11-27 19:12:04.945117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.660 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.660 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:55.660 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.660 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.660 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.660 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 BaseBdev1_malloc 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 [2024-11-27 19:12:05.323657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:55.920 [2024-11-27 19:12:05.323731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.920 [2024-11-27 19:12:05.323755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:55.920 [2024-11-27 19:12:05.323766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.920 [2024-11-27 19:12:05.325764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.920 [2024-11-27 19:12:05.325801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.920 BaseBdev1 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 BaseBdev2_malloc 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 [2024-11-27 19:12:05.377431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:55.920 [2024-11-27 19:12:05.377491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.920 [2024-11-27 19:12:05.377515] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:55.920 [2024-11-27 19:12:05.377525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.920 [2024-11-27 19:12:05.379512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.920 [2024-11-27 19:12:05.379553] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.920 BaseBdev2 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 spare_malloc 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 spare_delay 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 [2024-11-27 19:12:05.456581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:55.920 [2024-11-27 19:12:05.456643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.920 [2024-11-27 19:12:05.456662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:55.920 [2024-11-27 19:12:05.456673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.920 [2024-11-27 19:12:05.458662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.920 [2024-11-27 19:12:05.458709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.920 spare 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.920 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.920 [2024-11-27 19:12:05.468631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.920 [2024-11-27 19:12:05.470346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.920 [2024-11-27 19:12:05.470434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:55.921 [2024-11-27 19:12:05.470447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:55.921 [2024-11-27 19:12:05.470674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:55.921 [2024-11-27 19:12:05.470846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:55.921 [2024-11-27 19:12:05.470864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:55.921 [2024-11-27 19:12:05.471010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.921 
"name": "raid_bdev1", 00:13:55.921 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:13:55.921 "strip_size_kb": 0, 00:13:55.921 "state": "online", 00:13:55.921 "raid_level": "raid1", 00:13:55.921 "superblock": false, 00:13:55.921 "num_base_bdevs": 2, 00:13:55.921 "num_base_bdevs_discovered": 2, 00:13:55.921 "num_base_bdevs_operational": 2, 00:13:55.921 "base_bdevs_list": [ 00:13:55.921 { 00:13:55.921 "name": "BaseBdev1", 00:13:55.921 "uuid": "321fc6ae-6f88-5eae-ac3a-0bdf196566df", 00:13:55.921 "is_configured": true, 00:13:55.921 "data_offset": 0, 00:13:55.921 "data_size": 65536 00:13:55.921 }, 00:13:55.921 { 00:13:55.921 "name": "BaseBdev2", 00:13:55.921 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:13:55.921 "is_configured": true, 00:13:55.921 "data_offset": 0, 00:13:55.921 "data_size": 65536 00:13:55.921 } 00:13:55.921 ] 00:13:55.921 }' 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.921 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:56.490 [2024-11-27 19:12:05.904150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:56.490 19:12:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.491 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.491 19:12:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.491 [2024-11-27 19:12:06.007770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.491 19:12:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.491 "name": "raid_bdev1", 00:13:56.491 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:13:56.491 "strip_size_kb": 0, 00:13:56.491 "state": "online", 00:13:56.491 "raid_level": "raid1", 00:13:56.491 "superblock": false, 00:13:56.491 "num_base_bdevs": 2, 00:13:56.491 "num_base_bdevs_discovered": 1, 00:13:56.491 "num_base_bdevs_operational": 1, 00:13:56.491 "base_bdevs_list": [ 00:13:56.491 { 00:13:56.491 "name": null, 00:13:56.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.491 "is_configured": false, 00:13:56.491 "data_offset": 0, 00:13:56.491 "data_size": 65536 00:13:56.491 }, 00:13:56.491 { 00:13:56.491 "name": "BaseBdev2", 00:13:56.491 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:13:56.491 "is_configured": true, 00:13:56.491 "data_offset": 0, 00:13:56.491 "data_size": 65536 00:13:56.491 } 00:13:56.491 ] 00:13:56.491 }' 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:56.491 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.491 [2024-11-27 19:12:06.107437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:56.491 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:56.491 Zero copy mechanism will not be used. 00:13:56.491 Running I/O for 60 seconds... 00:13:57.061 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.061 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.061 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.061 [2024-11-27 19:12:06.478307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.061 19:12:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.061 19:12:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:57.061 [2024-11-27 19:12:06.542343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:57.061 [2024-11-27 19:12:06.544166] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.061 [2024-11-27 19:12:06.657574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.061 [2024-11-27 19:12:06.658185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:57.321 [2024-11-27 19:12:06.783335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:57.321 [2024-11-27 19:12:06.783655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:57.841 174.00 IOPS, 522.00 MiB/s 
[2024-11-27T19:12:07.477Z] [2024-11-27 19:12:07.252984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:57.841 [2024-11-27 19:12:07.253425] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.101 "name": "raid_bdev1", 00:13:58.101 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:13:58.101 "strip_size_kb": 0, 00:13:58.101 "state": "online", 00:13:58.101 "raid_level": "raid1", 00:13:58.101 "superblock": false, 00:13:58.101 "num_base_bdevs": 2, 00:13:58.101 "num_base_bdevs_discovered": 2, 00:13:58.101 "num_base_bdevs_operational": 2, 00:13:58.101 "process": { 00:13:58.101 "type": "rebuild", 00:13:58.101 "target": "spare", 
00:13:58.101 "progress": { 00:13:58.101 "blocks": 12288, 00:13:58.101 "percent": 18 00:13:58.101 } 00:13:58.101 }, 00:13:58.101 "base_bdevs_list": [ 00:13:58.101 { 00:13:58.101 "name": "spare", 00:13:58.101 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:13:58.101 "is_configured": true, 00:13:58.101 "data_offset": 0, 00:13:58.101 "data_size": 65536 00:13:58.101 }, 00:13:58.101 { 00:13:58.101 "name": "BaseBdev2", 00:13:58.101 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:13:58.101 "is_configured": true, 00:13:58.101 "data_offset": 0, 00:13:58.101 "data_size": 65536 00:13:58.101 } 00:13:58.101 ] 00:13:58.101 }' 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.101 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.102 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.102 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.102 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.102 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.102 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.102 [2024-11-27 19:12:07.660216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.102 [2024-11-27 19:12:07.692043] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:58.102 [2024-11-27 19:12:07.720062] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.102 [2024-11-27 19:12:07.722550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.102 [2024-11-27 19:12:07.722626] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.102 [2024-11-27 19:12:07.722652] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.361 [2024-11-27 19:12:07.758892] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.361 "name": "raid_bdev1", 00:13:58.362 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:13:58.362 "strip_size_kb": 0, 00:13:58.362 "state": "online", 00:13:58.362 "raid_level": "raid1", 00:13:58.362 "superblock": false, 00:13:58.362 "num_base_bdevs": 2, 00:13:58.362 "num_base_bdevs_discovered": 1, 00:13:58.362 "num_base_bdevs_operational": 1, 00:13:58.362 "base_bdevs_list": [ 00:13:58.362 { 00:13:58.362 "name": null, 00:13:58.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.362 "is_configured": false, 00:13:58.362 "data_offset": 0, 00:13:58.362 "data_size": 65536 00:13:58.362 }, 00:13:58.362 { 00:13:58.362 "name": "BaseBdev2", 00:13:58.362 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:13:58.362 "is_configured": true, 00:13:58.362 "data_offset": 0, 00:13:58.362 "data_size": 65536 00:13:58.362 } 00:13:58.362 ] 00:13:58.362 }' 00:13:58.362 19:12:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.362 19:12:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.621 161.50 IOPS, 484.50 MiB/s [2024-11-27T19:12:08.257Z] 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.621 19:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.881 "name": "raid_bdev1", 00:13:58.881 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:13:58.881 "strip_size_kb": 0, 00:13:58.881 "state": "online", 00:13:58.881 "raid_level": "raid1", 00:13:58.881 "superblock": false, 00:13:58.881 "num_base_bdevs": 2, 00:13:58.881 "num_base_bdevs_discovered": 1, 00:13:58.881 "num_base_bdevs_operational": 1, 00:13:58.881 "base_bdevs_list": [ 00:13:58.881 { 00:13:58.881 "name": null, 00:13:58.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.881 "is_configured": false, 00:13:58.881 "data_offset": 0, 00:13:58.881 "data_size": 65536 00:13:58.881 }, 00:13:58.881 { 00:13:58.881 "name": "BaseBdev2", 00:13:58.881 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:13:58.881 "is_configured": true, 00:13:58.881 "data_offset": 0, 00:13:58.881 "data_size": 65536 00:13:58.881 } 00:13:58.881 ] 00:13:58.881 }' 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.881 19:12:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.881 [2024-11-27 19:12:08.390321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.881 19:12:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:58.881 [2024-11-27 19:12:08.444562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:58.881 [2024-11-27 19:12:08.446472] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.141 [2024-11-27 19:12:08.559830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:59.141 [2024-11-27 19:12:08.560453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:59.141 [2024-11-27 19:12:08.768789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.141 [2024-11-27 19:12:08.769110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.401 [2024-11-27 19:12:08.990687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:59.705 [2024-11-27 19:12:09.102707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.705 [2024-11-27 19:12:09.103137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:00.012 165.00 IOPS, 495.00 MiB/s [2024-11-27T19:12:09.648Z] 19:12:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.012 [2024-11-27 19:12:09.446321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.012 "name": "raid_bdev1", 00:14:00.012 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:14:00.012 "strip_size_kb": 0, 00:14:00.012 "state": "online", 00:14:00.012 "raid_level": "raid1", 00:14:00.012 "superblock": false, 00:14:00.012 "num_base_bdevs": 2, 00:14:00.012 "num_base_bdevs_discovered": 2, 00:14:00.012 "num_base_bdevs_operational": 2, 00:14:00.012 "process": { 00:14:00.012 "type": "rebuild", 00:14:00.012 "target": "spare", 00:14:00.012 "progress": { 00:14:00.012 "blocks": 12288, 00:14:00.012 "percent": 18 00:14:00.012 } 00:14:00.012 }, 00:14:00.012 "base_bdevs_list": [ 00:14:00.012 { 00:14:00.012 "name": "spare", 00:14:00.012 "uuid": 
"19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:00.012 "is_configured": true, 00:14:00.012 "data_offset": 0, 00:14:00.012 "data_size": 65536 00:14:00.012 }, 00:14:00.012 { 00:14:00.012 "name": "BaseBdev2", 00:14:00.012 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:00.012 "is_configured": true, 00:14:00.012 "data_offset": 0, 00:14:00.012 "data_size": 65536 00:14:00.012 } 00:14:00.012 ] 00:14:00.012 }' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.012 19:12:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.012 [2024-11-27 19:12:09.560836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.012 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.012 "name": "raid_bdev1", 00:14:00.012 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:14:00.012 "strip_size_kb": 0, 00:14:00.012 "state": "online", 00:14:00.012 "raid_level": "raid1", 00:14:00.012 "superblock": false, 00:14:00.012 "num_base_bdevs": 2, 00:14:00.012 "num_base_bdevs_discovered": 2, 00:14:00.012 "num_base_bdevs_operational": 2, 00:14:00.012 "process": { 00:14:00.012 "type": "rebuild", 00:14:00.012 "target": "spare", 00:14:00.012 "progress": { 00:14:00.012 "blocks": 16384, 00:14:00.012 "percent": 25 00:14:00.012 } 00:14:00.012 }, 00:14:00.012 "base_bdevs_list": [ 00:14:00.012 { 00:14:00.012 "name": "spare", 00:14:00.012 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:00.012 "is_configured": true, 00:14:00.012 "data_offset": 0, 00:14:00.012 "data_size": 65536 00:14:00.012 }, 00:14:00.012 { 00:14:00.012 "name": "BaseBdev2", 00:14:00.012 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:00.012 "is_configured": true, 00:14:00.012 "data_offset": 0, 00:14:00.012 "data_size": 65536 00:14:00.012 } 00:14:00.012 ] 00:14:00.012 }' 00:14:00.012 19:12:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.272 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.273 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.273 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.273 19:12:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.273 [2024-11-27 19:12:09.801186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:00.273 [2024-11-27 19:12:09.801915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:01.102 137.00 IOPS, 411.00 MiB/s [2024-11-27T19:12:10.738Z] 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.102 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.102 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.102 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.103 19:12:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.103 [2024-11-27 19:12:10.731168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:01.363 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.363 "name": "raid_bdev1", 00:14:01.363 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:14:01.363 "strip_size_kb": 0, 00:14:01.363 "state": "online", 00:14:01.363 "raid_level": "raid1", 00:14:01.363 "superblock": false, 00:14:01.363 "num_base_bdevs": 2, 00:14:01.363 "num_base_bdevs_discovered": 2, 00:14:01.363 "num_base_bdevs_operational": 2, 00:14:01.363 "process": { 00:14:01.363 "type": "rebuild", 00:14:01.363 "target": "spare", 00:14:01.363 "progress": { 00:14:01.363 "blocks": 32768, 00:14:01.363 "percent": 50 00:14:01.363 } 00:14:01.363 }, 00:14:01.363 "base_bdevs_list": [ 00:14:01.363 { 00:14:01.363 "name": "spare", 00:14:01.363 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:01.363 "is_configured": true, 00:14:01.363 "data_offset": 0, 00:14:01.363 "data_size": 65536 00:14:01.363 }, 00:14:01.363 { 00:14:01.363 "name": "BaseBdev2", 00:14:01.363 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:01.363 "is_configured": true, 00:14:01.363 "data_offset": 0, 00:14:01.363 "data_size": 65536 00:14:01.363 } 00:14:01.363 ] 00:14:01.363 }' 00:14:01.363 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.363 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.363 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.363 19:12:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.363 19:12:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.623 [2024-11-27 19:12:11.083782] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:02.193 118.20 IOPS, 354.60 MiB/s [2024-11-27T19:12:11.829Z] [2024-11-27 19:12:11.533086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:02.193 [2024-11-27 19:12:11.762030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:02.452 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.453 "name": "raid_bdev1", 00:14:02.453 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 
00:14:02.453 "strip_size_kb": 0, 00:14:02.453 "state": "online", 00:14:02.453 "raid_level": "raid1", 00:14:02.453 "superblock": false, 00:14:02.453 "num_base_bdevs": 2, 00:14:02.453 "num_base_bdevs_discovered": 2, 00:14:02.453 "num_base_bdevs_operational": 2, 00:14:02.453 "process": { 00:14:02.453 "type": "rebuild", 00:14:02.453 "target": "spare", 00:14:02.453 "progress": { 00:14:02.453 "blocks": 51200, 00:14:02.453 "percent": 78 00:14:02.453 } 00:14:02.453 }, 00:14:02.453 "base_bdevs_list": [ 00:14:02.453 { 00:14:02.453 "name": "spare", 00:14:02.453 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:02.453 "is_configured": true, 00:14:02.453 "data_offset": 0, 00:14:02.453 "data_size": 65536 00:14:02.453 }, 00:14:02.453 { 00:14:02.453 "name": "BaseBdev2", 00:14:02.453 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:02.453 "is_configured": true, 00:14:02.453 "data_offset": 0, 00:14:02.453 "data_size": 65536 00:14:02.453 } 00:14:02.453 ] 00:14:02.453 }' 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.453 19:12:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.281 106.00 IOPS, 318.00 MiB/s [2024-11-27T19:12:12.917Z] [2024-11-27 19:12:12.630740] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:03.281 [2024-11-27 19:12:12.735492] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:03.282 [2024-11-27 19:12:12.737214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.541 19:12:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.541 19:12:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.541 "name": "raid_bdev1", 00:14:03.541 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:14:03.541 "strip_size_kb": 0, 00:14:03.541 "state": "online", 00:14:03.541 "raid_level": "raid1", 00:14:03.541 "superblock": false, 00:14:03.541 "num_base_bdevs": 2, 00:14:03.541 "num_base_bdevs_discovered": 2, 00:14:03.541 "num_base_bdevs_operational": 2, 00:14:03.541 "base_bdevs_list": [ 00:14:03.541 { 00:14:03.541 "name": "spare", 00:14:03.541 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:03.541 "is_configured": true, 00:14:03.541 "data_offset": 0, 00:14:03.541 "data_size": 65536 00:14:03.541 }, 00:14:03.541 { 00:14:03.541 "name": "BaseBdev2", 00:14:03.541 "uuid": 
"a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:03.541 "is_configured": true, 00:14:03.541 "data_offset": 0, 00:14:03.541 "data_size": 65536 00:14:03.541 } 00:14:03.541 ] 00:14:03.541 }' 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.541 95.00 IOPS, 285.00 MiB/s [2024-11-27T19:12:13.177Z] 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.541 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.801 
"name": "raid_bdev1", 00:14:03.801 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:14:03.801 "strip_size_kb": 0, 00:14:03.801 "state": "online", 00:14:03.801 "raid_level": "raid1", 00:14:03.801 "superblock": false, 00:14:03.801 "num_base_bdevs": 2, 00:14:03.801 "num_base_bdevs_discovered": 2, 00:14:03.801 "num_base_bdevs_operational": 2, 00:14:03.801 "base_bdevs_list": [ 00:14:03.801 { 00:14:03.801 "name": "spare", 00:14:03.801 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:03.801 "is_configured": true, 00:14:03.801 "data_offset": 0, 00:14:03.801 "data_size": 65536 00:14:03.801 }, 00:14:03.801 { 00:14:03.801 "name": "BaseBdev2", 00:14:03.801 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:03.801 "is_configured": true, 00:14:03.801 "data_offset": 0, 00:14:03.801 "data_size": 65536 00:14:03.801 } 00:14:03.801 ] 00:14:03.801 }' 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.801 
19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.801 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.801 "name": "raid_bdev1", 00:14:03.801 "uuid": "0dc8566b-2187-440f-b159-de32d1a35838", 00:14:03.801 "strip_size_kb": 0, 00:14:03.801 "state": "online", 00:14:03.801 "raid_level": "raid1", 00:14:03.801 "superblock": false, 00:14:03.801 "num_base_bdevs": 2, 00:14:03.802 "num_base_bdevs_discovered": 2, 00:14:03.802 "num_base_bdevs_operational": 2, 00:14:03.802 "base_bdevs_list": [ 00:14:03.802 { 00:14:03.802 "name": "spare", 00:14:03.802 "uuid": "19efcad7-b9f0-59df-8330-556bf5fb9427", 00:14:03.802 "is_configured": true, 00:14:03.802 "data_offset": 0, 00:14:03.802 "data_size": 65536 00:14:03.802 }, 00:14:03.802 { 00:14:03.802 "name": "BaseBdev2", 00:14:03.802 "uuid": "a2c10766-1e8f-5227-a0a8-74255a56e2cf", 00:14:03.802 "is_configured": true, 00:14:03.802 "data_offset": 0, 00:14:03.802 "data_size": 65536 00:14:03.802 } 00:14:03.802 ] 00:14:03.802 }' 00:14:03.802 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:14:03.802 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.370 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:04.370 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.370 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.370 [2024-11-27 19:12:13.729106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.370 [2024-11-27 19:12:13.729145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.370 00:14:04.370 Latency(us) 00:14:04.370 [2024-11-27T19:12:14.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.370 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:04.370 raid_bdev1 : 7.71 89.75 269.25 0.00 0.00 16159.76 318.38 113099.68 00:14:04.370 [2024-11-27T19:12:14.006Z] =================================================================================================================== 00:14:04.370 [2024-11-27T19:12:14.006Z] Total : 89.75 269.25 0.00 0.00 16159.76 318.38 113099.68 00:14:04.370 [2024-11-27 19:12:13.826024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.370 [2024-11-27 19:12:13.826080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.370 [2024-11-27 19:12:13.826172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.371 [2024-11-27 19:12:13.826185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:04.371 { 00:14:04.371 "results": [ 00:14:04.371 { 00:14:04.371 "job": "raid_bdev1", 00:14:04.371 "core_mask": "0x1", 00:14:04.371 "workload": "randrw", 00:14:04.371 "percentage": 50, 
00:14:04.371 "status": "finished", 00:14:04.371 "queue_depth": 2, 00:14:04.371 "io_size": 3145728, 00:14:04.371 "runtime": 7.7103, 00:14:04.371 "iops": 89.75007457556775, 00:14:04.371 "mibps": 269.25022372670327, 00:14:04.371 "io_failed": 0, 00:14:04.371 "io_timeout": 0, 00:14:04.371 "avg_latency_us": 16159.76139536058, 00:14:04.371 "min_latency_us": 318.37903930131006, 00:14:04.371 "max_latency_us": 113099.68209606987 00:14:04.371 } 00:14:04.371 ], 00:14:04.371 "core_count": 1 00:14:04.371 } 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:04.371 
19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.371 19:12:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:04.631 /dev/nbd0 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.631 1+0 records in 00:14:04.631 1+0 records out 00:14:04.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336771 s, 12.2 MB/s 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.631 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:04.891 /dev/nbd1 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.891 1+0 records in 00:14:04.891 1+0 records out 00:14:04.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550063 s, 7.4 MB/s 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:04.891 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.151 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:05.411 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.412 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:05.412 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.412 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:05.412 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.412 19:12:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:05.412 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:05.412 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:05.412 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:05.412 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:05.412 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:05.412 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76576 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76576 ']' 00:14:05.671 19:12:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76576 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:05.671 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.672 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76576 00:14:05.672 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.672 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.672 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76576' 00:14:05.672 killing process with pid 76576 00:14:05.672 Received shutdown signal, test time was about 8.999262 seconds 00:14:05.672 00:14:05.672 Latency(us) 00:14:05.672 [2024-11-27T19:12:15.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.672 [2024-11-27T19:12:15.308Z] =================================================================================================================== 00:14:05.672 [2024-11-27T19:12:15.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:05.672 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76576 00:14:05.672 [2024-11-27 19:12:15.091358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.672 19:12:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76576 00:14:05.931 [2024-11-27 19:12:15.309074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.870 19:12:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:06.870 00:14:06.870 real 0m12.118s 00:14:06.870 user 0m15.261s 00:14:06.870 sys 0m1.563s 00:14:06.870 ************************************ 00:14:06.870 END TEST raid_rebuild_test_io 00:14:06.870 
************************************ 00:14:06.870 19:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.870 19:12:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.131 19:12:16 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:07.131 19:12:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:07.131 19:12:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.131 19:12:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.131 ************************************ 00:14:07.131 START TEST raid_rebuild_test_sb_io 00:14:07.131 ************************************ 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:07.131 19:12:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76952 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76952 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76952 ']' 
00:14:07.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.131 19:12:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.131 [2024-11-27 19:12:16.648904] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:07.131 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:07.131 Zero copy mechanism will not be used. 00:14:07.131 [2024-11-27 19:12:16.649125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76952 ] 00:14:07.391 [2024-11-27 19:12:16.828438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.391 [2024-11-27 19:12:16.940706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.652 [2024-11-27 19:12:17.135441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.652 [2024-11-27 19:12:17.135474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:07.913 19:12:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.913 BaseBdev1_malloc 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.913 [2024-11-27 19:12:17.519929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.913 [2024-11-27 19:12:17.520036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.913 [2024-11-27 19:12:17.520066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:07.913 [2024-11-27 19:12:17.520078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.913 [2024-11-27 19:12:17.522273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.913 [2024-11-27 19:12:17.522342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.913 BaseBdev1 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.913 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.176 BaseBdev2_malloc 00:14:08.176 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.176 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:08.176 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.176 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.177 [2024-11-27 19:12:17.575275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:08.177 [2024-11-27 19:12:17.575336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.177 [2024-11-27 19:12:17.575359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:08.177 [2024-11-27 19:12:17.575370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.177 [2024-11-27 19:12:17.577466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.177 [2024-11-27 19:12:17.577556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:08.177 BaseBdev2 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:08.177 spare_malloc 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.177 spare_delay 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.177 [2024-11-27 19:12:17.676589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.177 [2024-11-27 19:12:17.676652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.177 [2024-11-27 19:12:17.676672] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:08.177 [2024-11-27 19:12:17.676684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.177 [2024-11-27 19:12:17.678839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.177 [2024-11-27 19:12:17.678934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.177 spare 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.177 [2024-11-27 19:12:17.688628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.177 [2024-11-27 19:12:17.690446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.177 [2024-11-27 19:12:17.690615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:08.177 [2024-11-27 19:12:17.690630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:08.177 [2024-11-27 19:12:17.690891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:08.177 [2024-11-27 19:12:17.691058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:08.177 [2024-11-27 19:12:17.691074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:08.177 [2024-11-27 19:12:17.691217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.177 19:12:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.177 "name": "raid_bdev1", 00:14:08.177 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:08.177 "strip_size_kb": 0, 00:14:08.177 "state": "online", 00:14:08.177 "raid_level": "raid1", 00:14:08.177 "superblock": true, 00:14:08.177 "num_base_bdevs": 2, 00:14:08.177 "num_base_bdevs_discovered": 2, 00:14:08.177 "num_base_bdevs_operational": 2, 00:14:08.177 "base_bdevs_list": [ 00:14:08.177 { 00:14:08.177 "name": "BaseBdev1", 00:14:08.177 "uuid": "b6541c09-cbd3-528a-8a7c-69086de9a0c7", 00:14:08.177 "is_configured": true, 00:14:08.177 "data_offset": 2048, 00:14:08.177 "data_size": 63488 00:14:08.177 }, 00:14:08.177 { 00:14:08.177 "name": "BaseBdev2", 00:14:08.177 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:08.177 "is_configured": true, 00:14:08.177 "data_offset": 2048, 
00:14:08.177 "data_size": 63488 00:14:08.177 } 00:14:08.177 ] 00:14:08.177 }' 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.177 19:12:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:08.755 [2024-11-27 19:12:18.164111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:08.755 19:12:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 [2024-11-27 19:12:18.267652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.755 "name": "raid_bdev1", 00:14:08.755 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:08.755 "strip_size_kb": 0, 00:14:08.755 "state": "online", 00:14:08.755 "raid_level": "raid1", 00:14:08.755 "superblock": true, 00:14:08.755 "num_base_bdevs": 2, 00:14:08.755 "num_base_bdevs_discovered": 1, 00:14:08.755 "num_base_bdevs_operational": 1, 00:14:08.755 "base_bdevs_list": [ 00:14:08.755 { 00:14:08.755 "name": null, 00:14:08.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.755 "is_configured": false, 00:14:08.755 "data_offset": 0, 00:14:08.755 "data_size": 63488 00:14:08.755 }, 00:14:08.755 { 00:14:08.755 "name": "BaseBdev2", 00:14:08.755 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:08.755 "is_configured": true, 00:14:08.755 "data_offset": 2048, 00:14:08.755 "data_size": 63488 00:14:08.755 } 00:14:08.755 ] 00:14:08.755 }' 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.755 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.755 [2024-11-27 19:12:18.367151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:08.755 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:08.755 Zero copy mechanism will not be used. 00:14:08.755 Running I/O for 60 seconds... 
00:14:09.326 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.326 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.326 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.326 [2024-11-27 19:12:18.687185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.326 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.326 19:12:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:09.326 [2024-11-27 19:12:18.735488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:09.326 [2024-11-27 19:12:18.737307] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.326 [2024-11-27 19:12:18.850162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:09.326 [2024-11-27 19:12:18.850652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:09.586 [2024-11-27 19:12:18.967628] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:09.586 [2024-11-27 19:12:18.967868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:09.844 173.00 IOPS, 519.00 MiB/s [2024-11-27T19:12:19.480Z] [2024-11-27 19:12:19.424757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:09.844 [2024-11-27 19:12:19.425032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.103 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.362 [2024-11-27 19:12:19.747561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:10.362 [2024-11-27 19:12:19.748035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.362 "name": "raid_bdev1", 00:14:10.362 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:10.362 "strip_size_kb": 0, 00:14:10.362 "state": "online", 00:14:10.362 "raid_level": "raid1", 00:14:10.362 "superblock": true, 00:14:10.362 "num_base_bdevs": 2, 00:14:10.362 "num_base_bdevs_discovered": 2, 00:14:10.362 "num_base_bdevs_operational": 2, 00:14:10.362 "process": { 00:14:10.362 "type": "rebuild", 00:14:10.362 "target": "spare", 00:14:10.362 "progress": { 
00:14:10.362 "blocks": 12288, 00:14:10.362 "percent": 19 00:14:10.362 } 00:14:10.362 }, 00:14:10.362 "base_bdevs_list": [ 00:14:10.362 { 00:14:10.362 "name": "spare", 00:14:10.362 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:10.362 "is_configured": true, 00:14:10.362 "data_offset": 2048, 00:14:10.362 "data_size": 63488 00:14:10.362 }, 00:14:10.362 { 00:14:10.362 "name": "BaseBdev2", 00:14:10.362 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:10.362 "is_configured": true, 00:14:10.362 "data_offset": 2048, 00:14:10.362 "data_size": 63488 00:14:10.362 } 00:14:10.362 ] 00:14:10.362 }' 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.362 [2024-11-27 19:12:19.868128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.362 19:12:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.362 [2024-11-27 19:12:19.875085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.362 [2024-11-27 19:12:19.979959] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:10.363 [2024-11-27 19:12:19.982275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.363 [2024-11-27 19:12:19.982345] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.363 [2024-11-27 19:12:19.982371] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:10.623 [2024-11-27 19:12:20.017301] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.623 19:12:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.623 "name": "raid_bdev1", 00:14:10.623 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:10.623 "strip_size_kb": 0, 00:14:10.623 "state": "online", 00:14:10.623 "raid_level": "raid1", 00:14:10.623 "superblock": true, 00:14:10.623 "num_base_bdevs": 2, 00:14:10.623 "num_base_bdevs_discovered": 1, 00:14:10.623 "num_base_bdevs_operational": 1, 00:14:10.623 "base_bdevs_list": [ 00:14:10.623 { 00:14:10.623 "name": null, 00:14:10.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.623 "is_configured": false, 00:14:10.623 "data_offset": 0, 00:14:10.623 "data_size": 63488 00:14:10.623 }, 00:14:10.623 { 00:14:10.623 "name": "BaseBdev2", 00:14:10.623 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:10.623 "is_configured": true, 00:14:10.623 "data_offset": 2048, 00:14:10.623 "data_size": 63488 00:14:10.623 } 00:14:10.623 ] 00:14:10.623 }' 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.623 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.883 167.00 IOPS, 501.00 MiB/s [2024-11-27T19:12:20.519Z] 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.883 
19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.883 "name": "raid_bdev1", 00:14:10.883 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:10.883 "strip_size_kb": 0, 00:14:10.883 "state": "online", 00:14:10.883 "raid_level": "raid1", 00:14:10.883 "superblock": true, 00:14:10.883 "num_base_bdevs": 2, 00:14:10.883 "num_base_bdevs_discovered": 1, 00:14:10.883 "num_base_bdevs_operational": 1, 00:14:10.883 "base_bdevs_list": [ 00:14:10.883 { 00:14:10.883 "name": null, 00:14:10.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.883 "is_configured": false, 00:14:10.883 "data_offset": 0, 00:14:10.883 "data_size": 63488 00:14:10.883 }, 00:14:10.883 { 00:14:10.883 "name": "BaseBdev2", 00:14:10.883 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:10.883 "is_configured": true, 00:14:10.883 "data_offset": 2048, 00:14:10.883 "data_size": 63488 00:14:10.883 } 00:14:10.883 ] 00:14:10.883 }' 00:14:10.883 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.143 19:12:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.143 [2024-11-27 19:12:20.580997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.143 19:12:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:11.143 [2024-11-27 19:12:20.634021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:11.143 [2024-11-27 19:12:20.635892] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.143 [2024-11-27 19:12:20.748798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.143 [2024-11-27 19:12:20.749323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.403 [2024-11-27 19:12:20.956257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.403 [2024-11-27 19:12:20.956494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.663 [2024-11-27 19:12:21.279860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:11.923 167.67 IOPS, 503.00 MiB/s [2024-11-27T19:12:21.559Z] [2024-11-27 19:12:21.501133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.184 "name": "raid_bdev1", 00:14:12.184 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:12.184 "strip_size_kb": 0, 00:14:12.184 "state": "online", 00:14:12.184 "raid_level": "raid1", 00:14:12.184 "superblock": true, 00:14:12.184 "num_base_bdevs": 2, 00:14:12.184 "num_base_bdevs_discovered": 2, 00:14:12.184 "num_base_bdevs_operational": 2, 00:14:12.184 "process": { 00:14:12.184 "type": "rebuild", 00:14:12.184 "target": "spare", 00:14:12.184 "progress": { 00:14:12.184 "blocks": 12288, 00:14:12.184 "percent": 19 00:14:12.184 } 00:14:12.184 }, 00:14:12.184 "base_bdevs_list": [ 00:14:12.184 { 00:14:12.184 "name": "spare", 00:14:12.184 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:12.184 "is_configured": true, 00:14:12.184 "data_offset": 2048, 00:14:12.184 "data_size": 63488 00:14:12.184 }, 00:14:12.184 { 
00:14:12.184 "name": "BaseBdev2", 00:14:12.184 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:12.184 "is_configured": true, 00:14:12.184 "data_offset": 2048, 00:14:12.184 "data_size": 63488 00:14:12.184 } 00:14:12.184 ] 00:14:12.184 }' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.184 [2024-11-27 19:12:21.728000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:12.184 [2024-11-27 19:12:21.728607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:12.184 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.184 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.445 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.445 "name": "raid_bdev1", 00:14:12.445 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:12.445 "strip_size_kb": 0, 00:14:12.445 "state": "online", 00:14:12.445 "raid_level": "raid1", 00:14:12.445 "superblock": true, 00:14:12.445 "num_base_bdevs": 2, 00:14:12.445 "num_base_bdevs_discovered": 2, 00:14:12.445 "num_base_bdevs_operational": 2, 00:14:12.445 "process": { 00:14:12.445 "type": "rebuild", 00:14:12.445 "target": "spare", 00:14:12.445 "progress": { 00:14:12.445 "blocks": 14336, 00:14:12.445 "percent": 22 00:14:12.445 } 00:14:12.445 }, 00:14:12.445 "base_bdevs_list": [ 00:14:12.445 { 00:14:12.445 "name": "spare", 00:14:12.445 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:12.445 "is_configured": true, 00:14:12.445 "data_offset": 2048, 00:14:12.445 "data_size": 63488 00:14:12.445 }, 00:14:12.445 { 
00:14:12.445 "name": "BaseBdev2", 00:14:12.445 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:12.445 "is_configured": true, 00:14:12.445 "data_offset": 2048, 00:14:12.445 "data_size": 63488 00:14:12.445 } 00:14:12.445 ] 00:14:12.445 }' 00:14:12.445 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.445 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.445 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.445 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.445 19:12:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.445 [2024-11-27 19:12:21.966224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:12.705 [2024-11-27 19:12:22.282069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:12.705 [2024-11-27 19:12:22.282721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:12.966 140.75 IOPS, 422.25 MiB/s [2024-11-27T19:12:22.602Z] [2024-11-27 19:12:22.499891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:13.226 [2024-11-27 19:12:22.834502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:13.226 [2024-11-27 19:12:22.834891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.487 19:12:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.487 "name": "raid_bdev1", 00:14:13.487 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:13.487 "strip_size_kb": 0, 00:14:13.487 "state": "online", 00:14:13.487 "raid_level": "raid1", 00:14:13.487 "superblock": true, 00:14:13.487 "num_base_bdevs": 2, 00:14:13.487 "num_base_bdevs_discovered": 2, 00:14:13.487 "num_base_bdevs_operational": 2, 00:14:13.487 "process": { 00:14:13.487 "type": "rebuild", 00:14:13.487 "target": "spare", 00:14:13.487 "progress": { 00:14:13.487 "blocks": 28672, 00:14:13.487 "percent": 45 00:14:13.487 } 00:14:13.487 }, 00:14:13.487 "base_bdevs_list": [ 00:14:13.487 { 00:14:13.487 "name": "spare", 00:14:13.487 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:13.487 "is_configured": true, 00:14:13.487 "data_offset": 2048, 
00:14:13.487 "data_size": 63488 00:14:13.487 }, 00:14:13.487 { 00:14:13.487 "name": "BaseBdev2", 00:14:13.487 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:13.487 "is_configured": true, 00:14:13.487 "data_offset": 2048, 00:14:13.487 "data_size": 63488 00:14:13.487 } 00:14:13.487 ] 00:14:13.487 }' 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.487 19:12:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.487 19:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.487 19:12:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.747 [2024-11-27 19:12:23.301076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:13.747 [2024-11-27 19:12:23.301339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:14.317 122.20 IOPS, 366.60 MiB/s [2024-11-27T19:12:23.953Z] [2024-11-27 19:12:23.669015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:14.577 [2024-11-27 19:12:24.000402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.577 "name": "raid_bdev1", 00:14:14.577 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:14.577 "strip_size_kb": 0, 00:14:14.577 "state": "online", 00:14:14.577 "raid_level": "raid1", 00:14:14.577 "superblock": true, 00:14:14.577 "num_base_bdevs": 2, 00:14:14.577 "num_base_bdevs_discovered": 2, 00:14:14.577 "num_base_bdevs_operational": 2, 00:14:14.577 "process": { 00:14:14.577 "type": "rebuild", 00:14:14.577 "target": "spare", 00:14:14.577 "progress": { 00:14:14.577 "blocks": 47104, 00:14:14.577 "percent": 74 00:14:14.577 } 00:14:14.577 }, 00:14:14.577 "base_bdevs_list": [ 00:14:14.577 { 00:14:14.577 "name": "spare", 00:14:14.577 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:14.577 "is_configured": true, 00:14:14.577 "data_offset": 2048, 00:14:14.577 "data_size": 63488 00:14:14.577 }, 00:14:14.577 { 00:14:14.577 "name": "BaseBdev2", 00:14:14.577 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:14.577 "is_configured": true, 00:14:14.577 "data_offset": 2048, 00:14:14.577 "data_size": 63488 00:14:14.577 } 00:14:14.577 ] 
00:14:14.577 }' 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.577 19:12:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.837 [2024-11-27 19:12:24.324157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:15.408 109.00 IOPS, 327.00 MiB/s [2024-11-27T19:12:25.044Z] [2024-11-27 19:12:24.973511] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:15.668 [2024-11-27 19:12:25.073443] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:15.668 [2024-11-27 19:12:25.075294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.668 
19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.668 "name": "raid_bdev1", 00:14:15.668 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:15.668 "strip_size_kb": 0, 00:14:15.668 "state": "online", 00:14:15.668 "raid_level": "raid1", 00:14:15.668 "superblock": true, 00:14:15.668 "num_base_bdevs": 2, 00:14:15.668 "num_base_bdevs_discovered": 2, 00:14:15.668 "num_base_bdevs_operational": 2, 00:14:15.668 "base_bdevs_list": [ 00:14:15.668 { 00:14:15.668 "name": "spare", 00:14:15.668 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:15.668 "is_configured": true, 00:14:15.668 "data_offset": 2048, 00:14:15.668 "data_size": 63488 00:14:15.668 }, 00:14:15.668 { 00:14:15.668 "name": "BaseBdev2", 00:14:15.668 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:15.668 "is_configured": true, 00:14:15.668 "data_offset": 2048, 00:14:15.668 "data_size": 63488 00:14:15.668 } 00:14:15.668 ] 00:14:15.668 }' 00:14:15.668 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:15.669 19:12:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.669 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.929 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.929 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.929 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.929 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.929 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.929 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.929 "name": "raid_bdev1", 00:14:15.929 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:15.929 "strip_size_kb": 0, 00:14:15.929 "state": "online", 00:14:15.929 "raid_level": "raid1", 00:14:15.929 "superblock": true, 00:14:15.929 "num_base_bdevs": 2, 00:14:15.929 "num_base_bdevs_discovered": 2, 00:14:15.930 "num_base_bdevs_operational": 2, 00:14:15.930 "base_bdevs_list": [ 00:14:15.930 { 00:14:15.930 "name": "spare", 00:14:15.930 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:15.930 "is_configured": true, 00:14:15.930 "data_offset": 2048, 00:14:15.930 "data_size": 63488 00:14:15.930 }, 00:14:15.930 { 00:14:15.930 "name": "BaseBdev2", 00:14:15.930 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:15.930 "is_configured": true, 00:14:15.930 
"data_offset": 2048, 00:14:15.930 "data_size": 63488 00:14:15.930 } 00:14:15.930 ] 00:14:15.930 }' 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.930 98.57 IOPS, 295.71 MiB/s [2024-11-27T19:12:25.566Z] 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.930 
19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.930 "name": "raid_bdev1", 00:14:15.930 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:15.930 "strip_size_kb": 0, 00:14:15.930 "state": "online", 00:14:15.930 "raid_level": "raid1", 00:14:15.930 "superblock": true, 00:14:15.930 "num_base_bdevs": 2, 00:14:15.930 "num_base_bdevs_discovered": 2, 00:14:15.930 "num_base_bdevs_operational": 2, 00:14:15.930 "base_bdevs_list": [ 00:14:15.930 { 00:14:15.930 "name": "spare", 00:14:15.930 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:15.930 "is_configured": true, 00:14:15.930 "data_offset": 2048, 00:14:15.930 "data_size": 63488 00:14:15.930 }, 00:14:15.930 { 00:14:15.930 "name": "BaseBdev2", 00:14:15.930 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:15.930 "is_configured": true, 00:14:15.930 "data_offset": 2048, 00:14:15.930 "data_size": 63488 00:14:15.930 } 00:14:15.930 ] 00:14:15.930 }' 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.930 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.282 [2024-11-27 19:12:25.848920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.282 [2024-11-27 19:12:25.849005] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.282 00:14:16.282 Latency(us) 00:14:16.282 [2024-11-27T19:12:25.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.282 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:16.282 raid_bdev1 : 7.51 94.41 283.22 0.00 0.00 14150.06 289.76 113099.68 00:14:16.282 [2024-11-27T19:12:25.918Z] =================================================================================================================== 00:14:16.282 [2024-11-27T19:12:25.918Z] Total : 94.41 283.22 0.00 0.00 14150.06 289.76 113099.68 00:14:16.282 [2024-11-27 19:12:25.886060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.282 [2024-11-27 19:12:25.886162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.282 [2024-11-27 19:12:25.886257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.282 [2024-11-27 19:12:25.886305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:16.282 { 00:14:16.282 "results": [ 00:14:16.282 { 00:14:16.282 "job": "raid_bdev1", 00:14:16.282 "core_mask": "0x1", 00:14:16.282 "workload": "randrw", 00:14:16.282 "percentage": 50, 00:14:16.282 "status": "finished", 00:14:16.282 "queue_depth": 2, 00:14:16.282 "io_size": 3145728, 00:14:16.282 "runtime": 7.510062, 00:14:16.282 "iops": 94.40667733502067, 00:14:16.282 "mibps": 283.220032005062, 00:14:16.282 "io_failed": 0, 00:14:16.282 "io_timeout": 0, 00:14:16.282 "avg_latency_us": 14150.05520783932, 00:14:16.282 "min_latency_us": 289.7606986899563, 00:14:16.282 "max_latency_us": 113099.68209606987 00:14:16.282 } 00:14:16.282 ], 00:14:16.282 "core_count": 1 00:14:16.282 } 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.282 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.542 19:12:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 
00:14:16.542 /dev/nbd0 00:14:16.542 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.542 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.542 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:16.542 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:16.542 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.543 1+0 records in 00:14:16.543 1+0 records out 00:14:16.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499529 s, 8.2 MB/s 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:16.543 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:16.803 /dev/nbd1 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:16.803 19:12:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.803 1+0 records in 00:14:16.803 1+0 records out 00:14:16.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024377 s, 16.8 MB/s 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:16.803 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.064 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.324 19:12:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.585 [2024-11-27 19:12:27.084734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.585 [2024-11-27 19:12:27.084830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.585 [2024-11-27 19:12:27.084871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:17.585 [2024-11-27 19:12:27.084899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.585 [2024-11-27 19:12:27.087055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.585 [2024-11-27 19:12:27.087125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.585 [2024-11-27 19:12:27.087235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:17.585 [2024-11-27 19:12:27.087302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.585 [2024-11-27 19:12:27.087477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.585 spare 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.585 [2024-11-27 19:12:27.187406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:17.585 [2024-11-27 19:12:27.187494] bdev_raid.c:1735:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:14:17.585 [2024-11-27 19:12:27.187774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:17.585 [2024-11-27 19:12:27.187974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:17.585 [2024-11-27 19:12:27.188034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:17.585 [2024-11-27 19:12:27.188262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.585 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.845 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.845 "name": "raid_bdev1", 00:14:17.845 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:17.845 "strip_size_kb": 0, 00:14:17.845 "state": "online", 00:14:17.845 "raid_level": "raid1", 00:14:17.845 "superblock": true, 00:14:17.845 "num_base_bdevs": 2, 00:14:17.845 "num_base_bdevs_discovered": 2, 00:14:17.845 "num_base_bdevs_operational": 2, 00:14:17.845 "base_bdevs_list": [ 00:14:17.845 { 00:14:17.845 "name": "spare", 00:14:17.845 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:17.845 "is_configured": true, 00:14:17.845 "data_offset": 2048, 00:14:17.845 "data_size": 63488 00:14:17.845 }, 00:14:17.845 { 00:14:17.845 "name": "BaseBdev2", 00:14:17.845 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:17.845 "is_configured": true, 00:14:17.845 "data_offset": 2048, 00:14:17.845 "data_size": 63488 00:14:17.845 } 00:14:17.845 ] 00:14:17.845 }' 00:14:17.845 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.845 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.105 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.105 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.105 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.106 "name": "raid_bdev1", 00:14:18.106 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:18.106 "strip_size_kb": 0, 00:14:18.106 "state": "online", 00:14:18.106 "raid_level": "raid1", 00:14:18.106 "superblock": true, 00:14:18.106 "num_base_bdevs": 2, 00:14:18.106 "num_base_bdevs_discovered": 2, 00:14:18.106 "num_base_bdevs_operational": 2, 00:14:18.106 "base_bdevs_list": [ 00:14:18.106 { 00:14:18.106 "name": "spare", 00:14:18.106 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:18.106 "is_configured": true, 00:14:18.106 "data_offset": 2048, 00:14:18.106 "data_size": 63488 00:14:18.106 }, 00:14:18.106 { 00:14:18.106 "name": "BaseBdev2", 00:14:18.106 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:18.106 "is_configured": true, 00:14:18.106 "data_offset": 2048, 00:14:18.106 "data_size": 63488 00:14:18.106 } 00:14:18.106 ] 00:14:18.106 }' 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.106 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.366 [2024-11-27 19:12:27.815611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.366 19:12:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.366 "name": "raid_bdev1", 00:14:18.366 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:18.366 "strip_size_kb": 0, 00:14:18.366 "state": "online", 00:14:18.366 "raid_level": "raid1", 00:14:18.366 "superblock": true, 00:14:18.366 "num_base_bdevs": 2, 00:14:18.366 "num_base_bdevs_discovered": 1, 00:14:18.366 "num_base_bdevs_operational": 1, 00:14:18.366 "base_bdevs_list": [ 00:14:18.366 { 00:14:18.366 "name": null, 00:14:18.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.366 "is_configured": false, 00:14:18.366 "data_offset": 0, 00:14:18.366 "data_size": 63488 00:14:18.366 }, 00:14:18.366 { 00:14:18.366 "name": "BaseBdev2", 00:14:18.366 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:18.366 "is_configured": true, 00:14:18.366 "data_offset": 2048, 00:14:18.366 
"data_size": 63488 00:14:18.366 } 00:14:18.366 ] 00:14:18.366 }' 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.366 19:12:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.626 19:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.626 19:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.626 19:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.626 [2024-11-27 19:12:28.255054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.626 [2024-11-27 19:12:28.255247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:18.626 [2024-11-27 19:12:28.255263] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:18.626 [2024-11-27 19:12:28.255302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.885 [2024-11-27 19:12:28.271358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:18.885 19:12:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.885 19:12:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:18.885 [2024-11-27 19:12:28.273244] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.824 "name": "raid_bdev1", 00:14:19.824 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:19.824 "strip_size_kb": 0, 00:14:19.824 "state": "online", 
00:14:19.824 "raid_level": "raid1", 00:14:19.824 "superblock": true, 00:14:19.824 "num_base_bdevs": 2, 00:14:19.824 "num_base_bdevs_discovered": 2, 00:14:19.824 "num_base_bdevs_operational": 2, 00:14:19.824 "process": { 00:14:19.824 "type": "rebuild", 00:14:19.824 "target": "spare", 00:14:19.824 "progress": { 00:14:19.824 "blocks": 20480, 00:14:19.824 "percent": 32 00:14:19.824 } 00:14:19.824 }, 00:14:19.824 "base_bdevs_list": [ 00:14:19.824 { 00:14:19.824 "name": "spare", 00:14:19.824 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:19.824 "is_configured": true, 00:14:19.824 "data_offset": 2048, 00:14:19.824 "data_size": 63488 00:14:19.824 }, 00:14:19.824 { 00:14:19.824 "name": "BaseBdev2", 00:14:19.824 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:19.824 "is_configured": true, 00:14:19.824 "data_offset": 2048, 00:14:19.824 "data_size": 63488 00:14:19.824 } 00:14:19.824 ] 00:14:19.824 }' 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.824 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.824 [2024-11-27 19:12:29.416921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.084 [2024-11-27 19:12:29.478294] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:20.084 [2024-11-27 
19:12:29.478444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.084 [2024-11-27 19:12:29.478461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.084 [2024-11-27 19:12:29.478470] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.084 "name": "raid_bdev1", 00:14:20.084 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:20.084 "strip_size_kb": 0, 00:14:20.084 "state": "online", 00:14:20.084 "raid_level": "raid1", 00:14:20.084 "superblock": true, 00:14:20.084 "num_base_bdevs": 2, 00:14:20.084 "num_base_bdevs_discovered": 1, 00:14:20.084 "num_base_bdevs_operational": 1, 00:14:20.084 "base_bdevs_list": [ 00:14:20.084 { 00:14:20.084 "name": null, 00:14:20.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.084 "is_configured": false, 00:14:20.084 "data_offset": 0, 00:14:20.084 "data_size": 63488 00:14:20.084 }, 00:14:20.084 { 00:14:20.084 "name": "BaseBdev2", 00:14:20.084 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:20.084 "is_configured": true, 00:14:20.084 "data_offset": 2048, 00:14:20.084 "data_size": 63488 00:14:20.084 } 00:14:20.084 ] 00:14:20.084 }' 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.084 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.345 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.345 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.345 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.345 [2024-11-27 19:12:29.955531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.345 [2024-11-27 19:12:29.955646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.345 [2024-11-27 19:12:29.955688] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:20.345 [2024-11-27 19:12:29.955749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.345 [2024-11-27 19:12:29.956239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.345 [2024-11-27 19:12:29.956304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.345 [2024-11-27 19:12:29.956427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:20.345 [2024-11-27 19:12:29.956471] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:20.345 [2024-11-27 19:12:29.956513] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:20.345 [2024-11-27 19:12:29.956587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.345 [2024-11-27 19:12:29.971657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:20.345 spare 00:14:20.345 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.345 19:12:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:20.345 [2024-11-27 19:12:29.973490] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.726 19:12:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.726 "name": "raid_bdev1", 00:14:21.726 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:21.726 "strip_size_kb": 0, 00:14:21.726 "state": "online", 00:14:21.726 "raid_level": "raid1", 00:14:21.726 "superblock": true, 00:14:21.726 "num_base_bdevs": 2, 00:14:21.726 "num_base_bdevs_discovered": 2, 00:14:21.726 "num_base_bdevs_operational": 2, 00:14:21.726 "process": { 00:14:21.726 "type": "rebuild", 00:14:21.726 "target": "spare", 00:14:21.726 "progress": { 00:14:21.726 "blocks": 20480, 00:14:21.726 "percent": 32 00:14:21.726 } 00:14:21.726 }, 00:14:21.726 "base_bdevs_list": [ 00:14:21.726 { 00:14:21.726 "name": "spare", 00:14:21.726 "uuid": "dfe09b1a-4da0-5fd3-9f48-b6a1c7c8ed9d", 00:14:21.726 "is_configured": true, 00:14:21.726 "data_offset": 2048, 00:14:21.726 "data_size": 63488 00:14:21.726 }, 00:14:21.726 { 00:14:21.726 "name": "BaseBdev2", 00:14:21.726 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:21.726 "is_configured": true, 00:14:21.726 "data_offset": 2048, 00:14:21.726 "data_size": 63488 00:14:21.726 } 00:14:21.726 ] 00:14:21.726 }' 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.726 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.727 [2024-11-27 19:12:31.137256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.727 [2024-11-27 19:12:31.178124] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.727 [2024-11-27 19:12:31.178250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.727 [2024-11-27 19:12:31.178290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.727 [2024-11-27 19:12:31.178311] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.727 "name": "raid_bdev1", 00:14:21.727 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:21.727 "strip_size_kb": 0, 00:14:21.727 "state": "online", 00:14:21.727 "raid_level": "raid1", 00:14:21.727 "superblock": true, 00:14:21.727 "num_base_bdevs": 2, 00:14:21.727 "num_base_bdevs_discovered": 1, 00:14:21.727 "num_base_bdevs_operational": 1, 00:14:21.727 "base_bdevs_list": [ 00:14:21.727 { 00:14:21.727 "name": null, 00:14:21.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.727 "is_configured": false, 00:14:21.727 "data_offset": 0, 00:14:21.727 "data_size": 63488 00:14:21.727 }, 00:14:21.727 { 00:14:21.727 "name": "BaseBdev2", 00:14:21.727 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:21.727 "is_configured": true, 00:14:21.727 "data_offset": 2048, 00:14:21.727 "data_size": 63488 00:14:21.727 } 00:14:21.727 ] 00:14:21.727 }' 
00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.727 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.986 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.246 "name": "raid_bdev1", 00:14:22.246 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:22.246 "strip_size_kb": 0, 00:14:22.246 "state": "online", 00:14:22.246 "raid_level": "raid1", 00:14:22.246 "superblock": true, 00:14:22.246 "num_base_bdevs": 2, 00:14:22.246 "num_base_bdevs_discovered": 1, 00:14:22.246 "num_base_bdevs_operational": 1, 00:14:22.246 "base_bdevs_list": [ 00:14:22.246 { 00:14:22.246 "name": null, 00:14:22.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.246 "is_configured": false, 00:14:22.246 "data_offset": 0, 
00:14:22.246 "data_size": 63488 00:14:22.246 }, 00:14:22.246 { 00:14:22.246 "name": "BaseBdev2", 00:14:22.246 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:22.246 "is_configured": true, 00:14:22.246 "data_offset": 2048, 00:14:22.246 "data_size": 63488 00:14:22.246 } 00:14:22.246 ] 00:14:22.246 }' 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.246 [2024-11-27 19:12:31.771147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:22.246 [2024-11-27 19:12:31.771247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.246 [2024-11-27 19:12:31.771312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:22.246 [2024-11-27 19:12:31.771345] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.246 [2024-11-27 19:12:31.771841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.246 [2024-11-27 19:12:31.771898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.246 [2024-11-27 19:12:31.772008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:22.246 [2024-11-27 19:12:31.772049] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:22.246 [2024-11-27 19:12:31.772090] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:22.246 [2024-11-27 19:12:31.772130] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:22.246 BaseBdev1 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.246 19:12:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.187 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.446 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.446 "name": "raid_bdev1", 00:14:23.446 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:23.446 "strip_size_kb": 0, 00:14:23.446 "state": "online", 00:14:23.446 "raid_level": "raid1", 00:14:23.446 "superblock": true, 00:14:23.446 "num_base_bdevs": 2, 00:14:23.446 "num_base_bdevs_discovered": 1, 00:14:23.446 "num_base_bdevs_operational": 1, 00:14:23.446 "base_bdevs_list": [ 00:14:23.446 { 00:14:23.446 "name": null, 00:14:23.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.446 "is_configured": false, 00:14:23.446 "data_offset": 0, 00:14:23.446 "data_size": 63488 00:14:23.446 }, 00:14:23.446 { 00:14:23.446 "name": "BaseBdev2", 00:14:23.446 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:23.446 "is_configured": true, 00:14:23.446 "data_offset": 2048, 00:14:23.446 "data_size": 63488 00:14:23.446 } 00:14:23.446 ] 00:14:23.446 }' 00:14:23.446 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.446 19:12:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.706 "name": "raid_bdev1", 00:14:23.706 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:23.706 "strip_size_kb": 0, 00:14:23.706 "state": "online", 00:14:23.706 "raid_level": "raid1", 00:14:23.706 "superblock": true, 00:14:23.706 "num_base_bdevs": 2, 00:14:23.706 "num_base_bdevs_discovered": 1, 00:14:23.706 "num_base_bdevs_operational": 1, 00:14:23.706 "base_bdevs_list": [ 00:14:23.706 { 00:14:23.706 "name": null, 00:14:23.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.706 "is_configured": false, 00:14:23.706 "data_offset": 0, 00:14:23.706 "data_size": 63488 00:14:23.706 }, 00:14:23.706 { 00:14:23.706 "name": "BaseBdev2", 00:14:23.706 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:23.706 "is_configured": true, 
00:14:23.706 "data_offset": 2048, 00:14:23.706 "data_size": 63488 00:14:23.706 } 00:14:23.706 ] 00:14:23.706 }' 00:14:23.706 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.966 [2024-11-27 19:12:33.400558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.966 [2024-11-27 19:12:33.400734] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:23.966 [2024-11-27 19:12:33.400763] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:23.966 request: 00:14:23.966 { 00:14:23.966 "base_bdev": "BaseBdev1", 00:14:23.966 "raid_bdev": "raid_bdev1", 00:14:23.966 "method": "bdev_raid_add_base_bdev", 00:14:23.966 "req_id": 1 00:14:23.966 } 00:14:23.966 Got JSON-RPC error response 00:14:23.966 response: 00:14:23.966 { 00:14:23.966 "code": -22, 00:14:23.966 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:23.966 } 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.966 19:12:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.979 "name": "raid_bdev1", 00:14:24.979 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:24.979 "strip_size_kb": 0, 00:14:24.979 "state": "online", 00:14:24.979 "raid_level": "raid1", 00:14:24.979 "superblock": true, 00:14:24.979 "num_base_bdevs": 2, 00:14:24.979 "num_base_bdevs_discovered": 1, 00:14:24.979 "num_base_bdevs_operational": 1, 00:14:24.979 "base_bdevs_list": [ 00:14:24.979 { 00:14:24.979 "name": null, 00:14:24.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.979 "is_configured": false, 00:14:24.979 "data_offset": 0, 00:14:24.979 "data_size": 63488 00:14:24.979 }, 00:14:24.979 { 00:14:24.979 "name": "BaseBdev2", 00:14:24.979 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:24.979 "is_configured": true, 00:14:24.979 "data_offset": 2048, 00:14:24.979 "data_size": 63488 00:14:24.979 } 00:14:24.979 ] 00:14:24.979 }' 
00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.979 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.239 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.239 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.239 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.240 "name": "raid_bdev1", 00:14:25.240 "uuid": "eb7d4e9b-f788-4d20-a642-2942c1b54db6", 00:14:25.240 "strip_size_kb": 0, 00:14:25.240 "state": "online", 00:14:25.240 "raid_level": "raid1", 00:14:25.240 "superblock": true, 00:14:25.240 "num_base_bdevs": 2, 00:14:25.240 "num_base_bdevs_discovered": 1, 00:14:25.240 "num_base_bdevs_operational": 1, 00:14:25.240 "base_bdevs_list": [ 00:14:25.240 { 00:14:25.240 "name": null, 00:14:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.240 "is_configured": false, 00:14:25.240 "data_offset": 0, 
00:14:25.240 "data_size": 63488 00:14:25.240 }, 00:14:25.240 { 00:14:25.240 "name": "BaseBdev2", 00:14:25.240 "uuid": "a1bdd393-0e2e-55d3-a463-a3172ed003bb", 00:14:25.240 "is_configured": true, 00:14:25.240 "data_offset": 2048, 00:14:25.240 "data_size": 63488 00:14:25.240 } 00:14:25.240 ] 00:14:25.240 }' 00:14:25.240 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76952 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76952 ']' 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76952 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.501 19:12:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76952 00:14:25.501 killing process with pid 76952 00:14:25.501 Received shutdown signal, test time was about 16.670670 seconds 00:14:25.501 00:14:25.501 Latency(us) 00:14:25.501 [2024-11-27T19:12:35.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.501 [2024-11-27T19:12:35.137Z] =================================================================================================================== 00:14:25.501 [2024-11-27T19:12:35.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.501 19:12:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.501 19:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.501 19:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76952' 00:14:25.501 19:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76952 00:14:25.501 [2024-11-27 19:12:35.007776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.501 [2024-11-27 19:12:35.007902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.501 [2024-11-27 19:12:35.007953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.501 [2024-11-27 19:12:35.007964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:25.501 19:12:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76952 00:14:25.761 [2024-11-27 19:12:35.226039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.140 19:12:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:27.141 00:14:27.141 real 0m19.799s 00:14:27.141 user 0m25.735s 00:14:27.141 sys 0m2.255s 00:14:27.141 ************************************ 00:14:27.141 END TEST raid_rebuild_test_sb_io 00:14:27.141 ************************************ 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.141 19:12:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:27.141 19:12:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:27.141 19:12:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:14:27.141 19:12:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.141 19:12:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.141 ************************************ 00:14:27.141 START TEST raid_rebuild_test 00:14:27.141 ************************************ 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77635 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77635 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77635 ']' 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.141 19:12:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.141 19:12:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.141 Zero copy mechanism will not be used. 00:14:27.141 [2024-11-27 19:12:36.515058] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:27.141 [2024-11-27 19:12:36.515159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77635 ] 00:14:27.141 [2024-11-27 19:12:36.688680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.400 [2024-11-27 19:12:36.799149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.401 [2024-11-27 19:12:36.971714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.401 [2024-11-27 19:12:36.971752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.969 BaseBdev1_malloc 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.969 [2024-11-27 19:12:37.377894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:27.969 [2024-11-27 19:12:37.377955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.969 [2024-11-27 19:12:37.377977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.969 [2024-11-27 19:12:37.377988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.969 [2024-11-27 19:12:37.379951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.969 [2024-11-27 19:12:37.379992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.969 BaseBdev1 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:27.969 BaseBdev2_malloc 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.969 [2024-11-27 19:12:37.432284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:27.969 [2024-11-27 19:12:37.432339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.969 [2024-11-27 19:12:37.432361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.969 [2024-11-27 19:12:37.432371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.969 [2024-11-27 19:12:37.434388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.969 [2024-11-27 19:12:37.434427] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:27.969 BaseBdev2 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.969 BaseBdev3_malloc 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.969 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.969 [2024-11-27 19:12:37.518132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:27.970 [2024-11-27 19:12:37.518251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.970 [2024-11-27 19:12:37.518276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:27.970 [2024-11-27 19:12:37.518287] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.970 [2024-11-27 19:12:37.520379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.970 [2024-11-27 19:12:37.520419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:27.970 BaseBdev3 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.970 BaseBdev4_malloc 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.970 [2024-11-27 19:12:37.567304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:27.970 [2024-11-27 19:12:37.567360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.970 [2024-11-27 19:12:37.567380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:27.970 [2024-11-27 19:12:37.567390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.970 [2024-11-27 19:12:37.569358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.970 [2024-11-27 19:12:37.569400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:27.970 BaseBdev4 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.970 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.228 spare_malloc 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.228 spare_delay 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.228 
19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.228 [2024-11-27 19:12:37.632729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.228 [2024-11-27 19:12:37.632775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.228 [2024-11-27 19:12:37.632792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:28.228 [2024-11-27 19:12:37.632803] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.228 [2024-11-27 19:12:37.634748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.228 [2024-11-27 19:12:37.634832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.228 spare 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.228 [2024-11-27 19:12:37.644751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.228 [2024-11-27 19:12:37.646429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.228 [2024-11-27 19:12:37.646495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.228 [2024-11-27 19:12:37.646543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:28.228 [2024-11-27 19:12:37.646614] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:28.228 [2024-11-27 19:12:37.646626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:28.228 [2024-11-27 19:12:37.646870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:28.228 [2024-11-27 19:12:37.647038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:28.228 [2024-11-27 19:12:37.647051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:28.228 [2024-11-27 19:12:37.647202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.228 19:12:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.228 "name": "raid_bdev1", 00:14:28.228 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:28.228 "strip_size_kb": 0, 00:14:28.228 "state": "online", 00:14:28.228 "raid_level": "raid1", 00:14:28.228 "superblock": false, 00:14:28.228 "num_base_bdevs": 4, 00:14:28.228 "num_base_bdevs_discovered": 4, 00:14:28.228 "num_base_bdevs_operational": 4, 00:14:28.228 "base_bdevs_list": [ 00:14:28.228 { 00:14:28.228 "name": "BaseBdev1", 00:14:28.228 "uuid": "d5073664-38b4-57d0-9368-ede2a4bb5cf2", 00:14:28.228 "is_configured": true, 00:14:28.228 "data_offset": 0, 00:14:28.228 "data_size": 65536 00:14:28.228 }, 00:14:28.228 { 00:14:28.228 "name": "BaseBdev2", 00:14:28.228 "uuid": "cb889b61-4d78-5ddf-a96e-ed18706417ea", 00:14:28.228 "is_configured": true, 00:14:28.228 "data_offset": 0, 00:14:28.228 "data_size": 65536 00:14:28.228 }, 00:14:28.228 { 00:14:28.228 "name": "BaseBdev3", 00:14:28.228 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:28.228 "is_configured": true, 00:14:28.228 "data_offset": 0, 00:14:28.228 "data_size": 65536 00:14:28.228 }, 00:14:28.228 { 00:14:28.228 "name": "BaseBdev4", 00:14:28.228 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:28.228 "is_configured": true, 00:14:28.228 "data_offset": 0, 00:14:28.228 "data_size": 65536 00:14:28.228 } 00:14:28.228 ] 00:14:28.228 }' 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.228 19:12:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.487 [2024-11-27 19:12:38.068274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:28.487 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.746 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.747 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:28.747 [2024-11-27 19:12:38.351603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:28.747 /dev/nbd0 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.006 19:12:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.006 1+0 records in 00:14:29.006 1+0 records out 00:14:29.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445243 s, 9.2 MB/s 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.006 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.007 19:12:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:29.007 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.007 19:12:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.007 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:29.007 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:29.007 19:12:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:34.287 65536+0 records in 00:14:34.287 65536+0 records out 00:14:34.287 33554432 bytes (34 MB, 32 MiB) copied, 5.04013 s, 6.7 MB/s 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.287 
19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.287 [2024-11-27 19:12:43.689750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.287 [2024-11-27 19:12:43.705809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.287 "name": "raid_bdev1", 00:14:34.287 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:34.287 "strip_size_kb": 0, 00:14:34.287 "state": "online", 00:14:34.287 "raid_level": "raid1", 00:14:34.287 "superblock": false, 00:14:34.287 "num_base_bdevs": 4, 00:14:34.287 "num_base_bdevs_discovered": 3, 00:14:34.287 "num_base_bdevs_operational": 3, 00:14:34.287 "base_bdevs_list": [ 00:14:34.287 { 00:14:34.287 "name": null, 00:14:34.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.287 
"is_configured": false, 00:14:34.287 "data_offset": 0, 00:14:34.287 "data_size": 65536 00:14:34.287 }, 00:14:34.287 { 00:14:34.287 "name": "BaseBdev2", 00:14:34.287 "uuid": "cb889b61-4d78-5ddf-a96e-ed18706417ea", 00:14:34.287 "is_configured": true, 00:14:34.287 "data_offset": 0, 00:14:34.287 "data_size": 65536 00:14:34.287 }, 00:14:34.287 { 00:14:34.287 "name": "BaseBdev3", 00:14:34.287 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:34.287 "is_configured": true, 00:14:34.287 "data_offset": 0, 00:14:34.287 "data_size": 65536 00:14:34.287 }, 00:14:34.287 { 00:14:34.287 "name": "BaseBdev4", 00:14:34.287 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:34.287 "is_configured": true, 00:14:34.287 "data_offset": 0, 00:14:34.287 "data_size": 65536 00:14:34.287 } 00:14:34.287 ] 00:14:34.287 }' 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.287 19:12:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.857 19:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.857 19:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.857 19:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.857 [2024-11-27 19:12:44.204972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.857 [2024-11-27 19:12:44.220783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:34.857 19:12:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.857 19:12:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:34.857 [2024-11-27 19:12:44.222671] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.797 "name": "raid_bdev1", 00:14:35.797 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:35.797 "strip_size_kb": 0, 00:14:35.797 "state": "online", 00:14:35.797 "raid_level": "raid1", 00:14:35.797 "superblock": false, 00:14:35.797 "num_base_bdevs": 4, 00:14:35.797 "num_base_bdevs_discovered": 4, 00:14:35.797 "num_base_bdevs_operational": 4, 00:14:35.797 "process": { 00:14:35.797 "type": "rebuild", 00:14:35.797 "target": "spare", 00:14:35.797 "progress": { 00:14:35.797 "blocks": 20480, 00:14:35.797 "percent": 31 00:14:35.797 } 00:14:35.797 }, 00:14:35.797 "base_bdevs_list": [ 00:14:35.797 { 00:14:35.797 "name": "spare", 00:14:35.797 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:35.797 "is_configured": true, 00:14:35.797 "data_offset": 0, 00:14:35.797 "data_size": 65536 00:14:35.797 }, 00:14:35.797 { 00:14:35.797 "name": "BaseBdev2", 00:14:35.797 "uuid": 
"cb889b61-4d78-5ddf-a96e-ed18706417ea", 00:14:35.797 "is_configured": true, 00:14:35.797 "data_offset": 0, 00:14:35.797 "data_size": 65536 00:14:35.797 }, 00:14:35.797 { 00:14:35.797 "name": "BaseBdev3", 00:14:35.797 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:35.797 "is_configured": true, 00:14:35.797 "data_offset": 0, 00:14:35.797 "data_size": 65536 00:14:35.797 }, 00:14:35.797 { 00:14:35.797 "name": "BaseBdev4", 00:14:35.797 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:35.797 "is_configured": true, 00:14:35.797 "data_offset": 0, 00:14:35.797 "data_size": 65536 00:14:35.797 } 00:14:35.797 ] 00:14:35.797 }' 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.797 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.797 [2024-11-27 19:12:45.377924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.797 [2024-11-27 19:12:45.427379] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.797 [2024-11-27 19:12:45.427531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.797 [2024-11-27 19:12:45.427586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.797 [2024-11-27 19:12:45.427611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.057 "name": "raid_bdev1", 00:14:36.057 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:36.057 "strip_size_kb": 0, 00:14:36.057 "state": "online", 
00:14:36.057 "raid_level": "raid1", 00:14:36.057 "superblock": false, 00:14:36.057 "num_base_bdevs": 4, 00:14:36.057 "num_base_bdevs_discovered": 3, 00:14:36.057 "num_base_bdevs_operational": 3, 00:14:36.057 "base_bdevs_list": [ 00:14:36.057 { 00:14:36.057 "name": null, 00:14:36.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.057 "is_configured": false, 00:14:36.057 "data_offset": 0, 00:14:36.057 "data_size": 65536 00:14:36.057 }, 00:14:36.057 { 00:14:36.057 "name": "BaseBdev2", 00:14:36.057 "uuid": "cb889b61-4d78-5ddf-a96e-ed18706417ea", 00:14:36.057 "is_configured": true, 00:14:36.057 "data_offset": 0, 00:14:36.057 "data_size": 65536 00:14:36.057 }, 00:14:36.057 { 00:14:36.057 "name": "BaseBdev3", 00:14:36.057 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:36.057 "is_configured": true, 00:14:36.057 "data_offset": 0, 00:14:36.057 "data_size": 65536 00:14:36.057 }, 00:14:36.057 { 00:14:36.057 "name": "BaseBdev4", 00:14:36.057 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:36.057 "is_configured": true, 00:14:36.057 "data_offset": 0, 00:14:36.057 "data_size": 65536 00:14:36.057 } 00:14:36.057 ] 00:14:36.057 }' 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.057 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.317 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:36.317 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.318 19:12:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.577 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.577 "name": "raid_bdev1", 00:14:36.577 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:36.577 "strip_size_kb": 0, 00:14:36.577 "state": "online", 00:14:36.577 "raid_level": "raid1", 00:14:36.577 "superblock": false, 00:14:36.577 "num_base_bdevs": 4, 00:14:36.577 "num_base_bdevs_discovered": 3, 00:14:36.577 "num_base_bdevs_operational": 3, 00:14:36.577 "base_bdevs_list": [ 00:14:36.577 { 00:14:36.577 "name": null, 00:14:36.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.577 "is_configured": false, 00:14:36.577 "data_offset": 0, 00:14:36.577 "data_size": 65536 00:14:36.577 }, 00:14:36.577 { 00:14:36.577 "name": "BaseBdev2", 00:14:36.577 "uuid": "cb889b61-4d78-5ddf-a96e-ed18706417ea", 00:14:36.577 "is_configured": true, 00:14:36.577 "data_offset": 0, 00:14:36.577 "data_size": 65536 00:14:36.577 }, 00:14:36.577 { 00:14:36.577 "name": "BaseBdev3", 00:14:36.577 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:36.577 "is_configured": true, 00:14:36.577 "data_offset": 0, 00:14:36.578 "data_size": 65536 00:14:36.578 }, 00:14:36.578 { 00:14:36.578 "name": "BaseBdev4", 00:14:36.578 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:36.578 "is_configured": true, 00:14:36.578 "data_offset": 0, 00:14:36.578 "data_size": 65536 00:14:36.578 } 00:14:36.578 ] 00:14:36.578 }' 00:14:36.578 19:12:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.578 [2024-11-27 19:12:46.075511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.578 [2024-11-27 19:12:46.089075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.578 19:12:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:36.578 [2024-11-27 19:12:46.090941] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.517 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.517 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.517 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.517 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.518 19:12:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.518 "name": "raid_bdev1", 00:14:37.518 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:37.518 "strip_size_kb": 0, 00:14:37.518 "state": "online", 00:14:37.518 "raid_level": "raid1", 00:14:37.518 "superblock": false, 00:14:37.518 "num_base_bdevs": 4, 00:14:37.518 "num_base_bdevs_discovered": 4, 00:14:37.518 "num_base_bdevs_operational": 4, 00:14:37.518 "process": { 00:14:37.518 "type": "rebuild", 00:14:37.518 "target": "spare", 00:14:37.518 "progress": { 00:14:37.518 "blocks": 20480, 00:14:37.518 "percent": 31 00:14:37.518 } 00:14:37.518 }, 00:14:37.518 "base_bdevs_list": [ 00:14:37.518 { 00:14:37.518 "name": "spare", 00:14:37.518 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:37.518 "is_configured": true, 00:14:37.518 "data_offset": 0, 00:14:37.518 "data_size": 65536 00:14:37.518 }, 00:14:37.518 { 00:14:37.518 "name": "BaseBdev2", 00:14:37.518 "uuid": "cb889b61-4d78-5ddf-a96e-ed18706417ea", 00:14:37.518 "is_configured": true, 00:14:37.518 "data_offset": 0, 00:14:37.518 "data_size": 65536 00:14:37.518 }, 00:14:37.518 { 00:14:37.518 "name": "BaseBdev3", 00:14:37.518 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:37.518 "is_configured": true, 00:14:37.518 "data_offset": 0, 00:14:37.518 "data_size": 65536 00:14:37.518 }, 00:14:37.518 { 00:14:37.518 "name": "BaseBdev4", 00:14:37.518 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:37.518 "is_configured": true, 00:14:37.518 "data_offset": 0, 00:14:37.518 "data_size": 65536 00:14:37.518 } 00:14:37.518 ] 00:14:37.518 }' 00:14:37.518 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.777 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.777 [2024-11-27 19:12:47.258656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:37.778 [2024-11-27 19:12:47.295613] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.778 
19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.778 "name": "raid_bdev1", 00:14:37.778 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:37.778 "strip_size_kb": 0, 00:14:37.778 "state": "online", 00:14:37.778 "raid_level": "raid1", 00:14:37.778 "superblock": false, 00:14:37.778 "num_base_bdevs": 4, 00:14:37.778 "num_base_bdevs_discovered": 3, 00:14:37.778 "num_base_bdevs_operational": 3, 00:14:37.778 "process": { 00:14:37.778 "type": "rebuild", 00:14:37.778 "target": "spare", 00:14:37.778 "progress": { 00:14:37.778 "blocks": 24576, 00:14:37.778 "percent": 37 00:14:37.778 } 00:14:37.778 }, 00:14:37.778 "base_bdevs_list": [ 00:14:37.778 { 00:14:37.778 "name": "spare", 00:14:37.778 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:37.778 "is_configured": true, 00:14:37.778 "data_offset": 0, 00:14:37.778 "data_size": 65536 00:14:37.778 }, 00:14:37.778 { 00:14:37.778 "name": null, 00:14:37.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.778 "is_configured": false, 00:14:37.778 "data_offset": 0, 00:14:37.778 "data_size": 65536 00:14:37.778 }, 00:14:37.778 { 00:14:37.778 "name": "BaseBdev3", 00:14:37.778 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:37.778 "is_configured": true, 
00:14:37.778 "data_offset": 0, 00:14:37.778 "data_size": 65536 00:14:37.778 }, 00:14:37.778 { 00:14:37.778 "name": "BaseBdev4", 00:14:37.778 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:37.778 "is_configured": true, 00:14:37.778 "data_offset": 0, 00:14:37.778 "data_size": 65536 00:14:37.778 } 00:14:37.778 ] 00:14:37.778 }' 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.778 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.038 19:12:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.038 "name": "raid_bdev1", 00:14:38.038 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:38.038 "strip_size_kb": 0, 00:14:38.038 "state": "online", 00:14:38.038 "raid_level": "raid1", 00:14:38.038 "superblock": false, 00:14:38.038 "num_base_bdevs": 4, 00:14:38.038 "num_base_bdevs_discovered": 3, 00:14:38.038 "num_base_bdevs_operational": 3, 00:14:38.038 "process": { 00:14:38.038 "type": "rebuild", 00:14:38.038 "target": "spare", 00:14:38.038 "progress": { 00:14:38.038 "blocks": 26624, 00:14:38.038 "percent": 40 00:14:38.038 } 00:14:38.038 }, 00:14:38.038 "base_bdevs_list": [ 00:14:38.038 { 00:14:38.038 "name": "spare", 00:14:38.038 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:38.038 "is_configured": true, 00:14:38.038 "data_offset": 0, 00:14:38.038 "data_size": 65536 00:14:38.038 }, 00:14:38.038 { 00:14:38.038 "name": null, 00:14:38.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.038 "is_configured": false, 00:14:38.038 "data_offset": 0, 00:14:38.038 "data_size": 65536 00:14:38.038 }, 00:14:38.038 { 00:14:38.038 "name": "BaseBdev3", 00:14:38.038 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:38.038 "is_configured": true, 00:14:38.038 "data_offset": 0, 00:14:38.038 "data_size": 65536 00:14:38.038 }, 00:14:38.038 { 00:14:38.038 "name": "BaseBdev4", 00:14:38.038 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:38.038 "is_configured": true, 00:14:38.038 "data_offset": 0, 00:14:38.038 "data_size": 65536 00:14:38.038 } 00:14:38.038 ] 00:14:38.038 }' 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.038 19:12:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.979 19:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.240 "name": "raid_bdev1", 00:14:39.240 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:39.240 "strip_size_kb": 0, 00:14:39.240 "state": "online", 00:14:39.240 "raid_level": "raid1", 00:14:39.240 "superblock": false, 00:14:39.240 "num_base_bdevs": 4, 00:14:39.240 "num_base_bdevs_discovered": 3, 00:14:39.240 "num_base_bdevs_operational": 3, 00:14:39.240 "process": { 00:14:39.240 "type": "rebuild", 00:14:39.240 "target": "spare", 00:14:39.240 "progress": { 00:14:39.240 
"blocks": 51200, 00:14:39.240 "percent": 78 00:14:39.240 } 00:14:39.240 }, 00:14:39.240 "base_bdevs_list": [ 00:14:39.240 { 00:14:39.240 "name": "spare", 00:14:39.240 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:39.240 "is_configured": true, 00:14:39.240 "data_offset": 0, 00:14:39.240 "data_size": 65536 00:14:39.240 }, 00:14:39.240 { 00:14:39.240 "name": null, 00:14:39.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.240 "is_configured": false, 00:14:39.240 "data_offset": 0, 00:14:39.240 "data_size": 65536 00:14:39.240 }, 00:14:39.240 { 00:14:39.240 "name": "BaseBdev3", 00:14:39.240 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:39.240 "is_configured": true, 00:14:39.240 "data_offset": 0, 00:14:39.240 "data_size": 65536 00:14:39.240 }, 00:14:39.240 { 00:14:39.240 "name": "BaseBdev4", 00:14:39.240 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:39.240 "is_configured": true, 00:14:39.240 "data_offset": 0, 00:14:39.240 "data_size": 65536 00:14:39.240 } 00:14:39.240 ] 00:14:39.240 }' 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.240 19:12:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.809 [2024-11-27 19:12:49.303380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:39.809 [2024-11-27 19:12:49.303538] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:39.809 [2024-11-27 19:12:49.303605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.378 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.378 "name": "raid_bdev1", 00:14:40.378 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:40.378 "strip_size_kb": 0, 00:14:40.378 "state": "online", 00:14:40.378 "raid_level": "raid1", 00:14:40.378 "superblock": false, 00:14:40.378 "num_base_bdevs": 4, 00:14:40.378 "num_base_bdevs_discovered": 3, 00:14:40.378 "num_base_bdevs_operational": 3, 00:14:40.378 "base_bdevs_list": [ 00:14:40.378 { 00:14:40.378 "name": "spare", 00:14:40.378 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:40.378 "is_configured": true, 00:14:40.378 "data_offset": 0, 00:14:40.378 "data_size": 65536 00:14:40.378 }, 00:14:40.378 { 00:14:40.378 "name": null, 00:14:40.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.379 "is_configured": false, 00:14:40.379 
"data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 }, 00:14:40.379 { 00:14:40.379 "name": "BaseBdev3", 00:14:40.379 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:40.379 "is_configured": true, 00:14:40.379 "data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 }, 00:14:40.379 { 00:14:40.379 "name": "BaseBdev4", 00:14:40.379 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:40.379 "is_configured": true, 00:14:40.379 "data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 } 00:14:40.379 ] 00:14:40.379 }' 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.379 19:12:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.379 "name": "raid_bdev1", 00:14:40.379 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:40.379 "strip_size_kb": 0, 00:14:40.379 "state": "online", 00:14:40.379 "raid_level": "raid1", 00:14:40.379 "superblock": false, 00:14:40.379 "num_base_bdevs": 4, 00:14:40.379 "num_base_bdevs_discovered": 3, 00:14:40.379 "num_base_bdevs_operational": 3, 00:14:40.379 "base_bdevs_list": [ 00:14:40.379 { 00:14:40.379 "name": "spare", 00:14:40.379 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:40.379 "is_configured": true, 00:14:40.379 "data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 }, 00:14:40.379 { 00:14:40.379 "name": null, 00:14:40.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.379 "is_configured": false, 00:14:40.379 "data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 }, 00:14:40.379 { 00:14:40.379 "name": "BaseBdev3", 00:14:40.379 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:40.379 "is_configured": true, 00:14:40.379 "data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 }, 00:14:40.379 { 00:14:40.379 "name": "BaseBdev4", 00:14:40.379 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:40.379 "is_configured": true, 00:14:40.379 "data_offset": 0, 00:14:40.379 "data_size": 65536 00:14:40.379 } 00:14:40.379 ] 00:14:40.379 }' 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.379 19:12:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.639 "name": "raid_bdev1", 00:14:40.639 "uuid": "3eb31bc4-6fe0-4a45-a655-e93dbeb996cb", 00:14:40.639 "strip_size_kb": 0, 00:14:40.639 "state": "online", 00:14:40.639 "raid_level": "raid1", 00:14:40.639 "superblock": false, 00:14:40.639 "num_base_bdevs": 4, 00:14:40.639 
"num_base_bdevs_discovered": 3, 00:14:40.639 "num_base_bdevs_operational": 3, 00:14:40.639 "base_bdevs_list": [ 00:14:40.639 { 00:14:40.639 "name": "spare", 00:14:40.639 "uuid": "ab045a89-43fa-5fa3-b5d3-ff3c6eccf66d", 00:14:40.639 "is_configured": true, 00:14:40.639 "data_offset": 0, 00:14:40.639 "data_size": 65536 00:14:40.639 }, 00:14:40.639 { 00:14:40.639 "name": null, 00:14:40.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.639 "is_configured": false, 00:14:40.639 "data_offset": 0, 00:14:40.639 "data_size": 65536 00:14:40.639 }, 00:14:40.639 { 00:14:40.639 "name": "BaseBdev3", 00:14:40.639 "uuid": "00b1ca3b-22c8-5e0f-a23d-84a4cc9eec30", 00:14:40.639 "is_configured": true, 00:14:40.639 "data_offset": 0, 00:14:40.639 "data_size": 65536 00:14:40.639 }, 00:14:40.639 { 00:14:40.639 "name": "BaseBdev4", 00:14:40.639 "uuid": "e889afd1-6f2d-549b-b949-74cd80b3245c", 00:14:40.639 "is_configured": true, 00:14:40.639 "data_offset": 0, 00:14:40.639 "data_size": 65536 00:14:40.639 } 00:14:40.639 ] 00:14:40.639 }' 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.639 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.900 [2024-11-27 19:12:50.514255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.900 [2024-11-27 19:12:50.514328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.900 [2024-11-27 19:12:50.514428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.900 [2024-11-27 19:12:50.514502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:40.900 [2024-11-27 19:12:50.514511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.900 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.161 19:12:50 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.161 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:41.161 /dev/nbd0 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.421 1+0 records in 00:14:41.421 1+0 records out 00:14:41.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338052 s, 12.1 MB/s 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.421 19:12:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:41.421 /dev/nbd1 00:14:41.421 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.682 1+0 records in 00:14:41.682 1+0 records out 00:14:41.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278165 s, 14.7 MB/s 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.682 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.941 19:12:51 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.941 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77635 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77635 ']' 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77635 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77635 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77635' 00:14:42.201 killing process with pid 77635 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77635 00:14:42.201 Received shutdown signal, test time was about 60.000000 seconds 00:14:42.201 00:14:42.201 Latency(us) 00:14:42.201 [2024-11-27T19:12:51.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.201 [2024-11-27T19:12:51.837Z] =================================================================================================================== 00:14:42.201 [2024-11-27T19:12:51.837Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.201 [2024-11-27 19:12:51.688519] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.201 19:12:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77635 00:14:42.770 [2024-11-27 19:12:52.144285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:43.733 00:14:43.733 real 0m16.783s 00:14:43.733 user 0m19.315s 00:14:43.733 sys 0m2.986s 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.733 ************************************ 00:14:43.733 END TEST raid_rebuild_test 
00:14:43.733 ************************************ 00:14:43.733 19:12:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:43.733 19:12:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:43.733 19:12:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.733 19:12:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.733 ************************************ 00:14:43.733 START TEST raid_rebuild_test_sb 00:14:43.733 ************************************ 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78076 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78076 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78076 ']' 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.733 19:12:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.993 [2024-11-27 19:12:53.385593] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:43.993 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.993 Zero copy mechanism will not be used. 
00:14:43.993 [2024-11-27 19:12:53.385830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78076 ] 00:14:43.993 [2024-11-27 19:12:53.566011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.253 [2024-11-27 19:12:53.675080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.253 [2024-11-27 19:12:53.868790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.253 [2024-11-27 19:12:53.868828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.823 BaseBdev1_malloc 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.823 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.823 [2024-11-27 19:12:54.240471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:44.823 [2024-11-27 19:12:54.240576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.823 [2024-11-27 19:12:54.240601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.824 [2024-11-27 19:12:54.240613] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.824 [2024-11-27 19:12:54.242615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.824 [2024-11-27 19:12:54.242657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.824 BaseBdev1 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 BaseBdev2_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 [2024-11-27 19:12:54.293975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.824 [2024-11-27 19:12:54.294033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.824 [2024-11-27 19:12:54.294054] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.824 [2024-11-27 19:12:54.294064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.824 [2024-11-27 19:12:54.296088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.824 [2024-11-27 19:12:54.296126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.824 BaseBdev2 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 BaseBdev3_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 [2024-11-27 19:12:54.382847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.824 [2024-11-27 19:12:54.382899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.824 [2024-11-27 19:12:54.382920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.824 [2024-11-27 19:12:54.382930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:44.824 [2024-11-27 19:12:54.384922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.824 [2024-11-27 19:12:54.385025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.824 BaseBdev3 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 BaseBdev4_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.824 [2024-11-27 19:12:54.436051] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:44.824 [2024-11-27 19:12:54.436164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.824 [2024-11-27 19:12:54.436188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:44.824 [2024-11-27 19:12:54.436198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.824 [2024-11-27 19:12:54.438181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.824 [2024-11-27 19:12:54.438222] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:44.824 BaseBdev4 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.824 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.086 spare_malloc 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.086 spare_delay 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.086 [2024-11-27 19:12:54.500885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.086 [2024-11-27 19:12:54.500932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.086 [2024-11-27 19:12:54.500948] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:45.086 [2024-11-27 19:12:54.500958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:45.086 [2024-11-27 19:12:54.502894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.086 [2024-11-27 19:12:54.502980] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.086 spare 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.086 [2024-11-27 19:12:54.512908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.086 [2024-11-27 19:12:54.514640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.086 [2024-11-27 19:12:54.514703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.086 [2024-11-27 19:12:54.514767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.086 [2024-11-27 19:12:54.514949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:45.086 [2024-11-27 19:12:54.514964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.086 [2024-11-27 19:12:54.515197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:45.086 [2024-11-27 19:12:54.515379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:45.086 [2024-11-27 19:12:54.515389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:45.086 [2024-11-27 19:12:54.515527] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.086 "name": "raid_bdev1", 00:14:45.086 "uuid": 
"8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:45.086 "strip_size_kb": 0, 00:14:45.086 "state": "online", 00:14:45.086 "raid_level": "raid1", 00:14:45.086 "superblock": true, 00:14:45.086 "num_base_bdevs": 4, 00:14:45.086 "num_base_bdevs_discovered": 4, 00:14:45.086 "num_base_bdevs_operational": 4, 00:14:45.086 "base_bdevs_list": [ 00:14:45.086 { 00:14:45.086 "name": "BaseBdev1", 00:14:45.086 "uuid": "a276ec93-dbb0-522d-8d49-d68b0d72e422", 00:14:45.086 "is_configured": true, 00:14:45.086 "data_offset": 2048, 00:14:45.086 "data_size": 63488 00:14:45.086 }, 00:14:45.086 { 00:14:45.086 "name": "BaseBdev2", 00:14:45.086 "uuid": "d673ac49-a8b3-56ff-bf0d-7f89be6ddb9e", 00:14:45.086 "is_configured": true, 00:14:45.086 "data_offset": 2048, 00:14:45.086 "data_size": 63488 00:14:45.086 }, 00:14:45.086 { 00:14:45.086 "name": "BaseBdev3", 00:14:45.086 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:45.086 "is_configured": true, 00:14:45.086 "data_offset": 2048, 00:14:45.086 "data_size": 63488 00:14:45.086 }, 00:14:45.086 { 00:14:45.086 "name": "BaseBdev4", 00:14:45.086 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:45.086 "is_configured": true, 00:14:45.086 "data_offset": 2048, 00:14:45.086 "data_size": 63488 00:14:45.086 } 00:14:45.086 ] 00:14:45.086 }' 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.086 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.656 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.656 19:12:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.656 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.656 19:12:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.656 [2024-11-27 19:12:54.988417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:45.656 19:12:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.656 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.657 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:45.657 [2024-11-27 19:12:55.259684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:45.657 /dev/nbd0 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.917 1+0 records in 00:14:45.917 1+0 records out 00:14:45.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317189 s, 12.9 MB/s 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:45.917 19:12:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:51.199 63488+0 records in 00:14:51.199 63488+0 records out 00:14:51.199 32505856 bytes (33 MB, 31 MiB) copied, 5.48742 s, 5.9 MB/s 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.199 19:13:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:51.460 [2024-11-27 19:13:00.998226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 [2024-11-27 19:13:01.034904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.460 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.720 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.720 "name": "raid_bdev1", 00:14:51.720 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:51.720 "strip_size_kb": 0, 00:14:51.720 "state": "online", 00:14:51.720 "raid_level": "raid1", 00:14:51.720 "superblock": true, 00:14:51.720 "num_base_bdevs": 4, 00:14:51.720 "num_base_bdevs_discovered": 3, 00:14:51.720 "num_base_bdevs_operational": 3, 00:14:51.720 "base_bdevs_list": [ 00:14:51.720 { 00:14:51.720 "name": null, 00:14:51.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.720 "is_configured": false, 00:14:51.720 "data_offset": 0, 00:14:51.720 "data_size": 63488 00:14:51.720 }, 00:14:51.720 { 00:14:51.720 "name": "BaseBdev2", 00:14:51.720 "uuid": "d673ac49-a8b3-56ff-bf0d-7f89be6ddb9e", 00:14:51.720 "is_configured": true, 00:14:51.720 
"data_offset": 2048, 00:14:51.720 "data_size": 63488 00:14:51.720 }, 00:14:51.720 { 00:14:51.720 "name": "BaseBdev3", 00:14:51.720 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:51.720 "is_configured": true, 00:14:51.720 "data_offset": 2048, 00:14:51.720 "data_size": 63488 00:14:51.720 }, 00:14:51.720 { 00:14:51.720 "name": "BaseBdev4", 00:14:51.720 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:51.720 "is_configured": true, 00:14:51.720 "data_offset": 2048, 00:14:51.720 "data_size": 63488 00:14:51.720 } 00:14:51.720 ] 00:14:51.720 }' 00:14:51.720 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.720 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.981 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.981 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.981 [2024-11-27 19:13:01.462174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.981 [2024-11-27 19:13:01.477290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:51.981 19:13:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.981 19:13:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:51.981 [2024-11-27 19:13:01.479085] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.920 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.920 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.920 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:52.920 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.920 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.920 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.921 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.921 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.921 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.921 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.921 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.921 "name": "raid_bdev1", 00:14:52.921 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:52.921 "strip_size_kb": 0, 00:14:52.921 "state": "online", 00:14:52.921 "raid_level": "raid1", 00:14:52.921 "superblock": true, 00:14:52.921 "num_base_bdevs": 4, 00:14:52.921 "num_base_bdevs_discovered": 4, 00:14:52.921 "num_base_bdevs_operational": 4, 00:14:52.921 "process": { 00:14:52.921 "type": "rebuild", 00:14:52.921 "target": "spare", 00:14:52.921 "progress": { 00:14:52.921 "blocks": 20480, 00:14:52.921 "percent": 32 00:14:52.921 } 00:14:52.921 }, 00:14:52.921 "base_bdevs_list": [ 00:14:52.921 { 00:14:52.921 "name": "spare", 00:14:52.921 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:52.921 "is_configured": true, 00:14:52.921 "data_offset": 2048, 00:14:52.921 "data_size": 63488 00:14:52.921 }, 00:14:52.921 { 00:14:52.921 "name": "BaseBdev2", 00:14:52.921 "uuid": "d673ac49-a8b3-56ff-bf0d-7f89be6ddb9e", 00:14:52.921 "is_configured": true, 00:14:52.921 "data_offset": 2048, 00:14:52.921 "data_size": 63488 00:14:52.921 }, 00:14:52.921 { 00:14:52.921 "name": "BaseBdev3", 00:14:52.921 "uuid": 
"55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:52.921 "is_configured": true, 00:14:52.921 "data_offset": 2048, 00:14:52.921 "data_size": 63488 00:14:52.921 }, 00:14:52.921 { 00:14:52.921 "name": "BaseBdev4", 00:14:52.921 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:52.921 "is_configured": true, 00:14:52.921 "data_offset": 2048, 00:14:52.921 "data_size": 63488 00:14:52.921 } 00:14:52.921 ] 00:14:52.921 }' 00:14:52.921 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.181 [2024-11-27 19:13:02.614826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.181 [2024-11-27 19:13:02.683776] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.181 [2024-11-27 19:13:02.683899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.181 [2024-11-27 19:13:02.683917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.181 [2024-11-27 19:13:02.683926] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.181 "name": "raid_bdev1", 00:14:53.181 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:53.181 "strip_size_kb": 0, 00:14:53.181 "state": "online", 00:14:53.181 "raid_level": "raid1", 00:14:53.181 "superblock": true, 00:14:53.181 "num_base_bdevs": 4, 00:14:53.181 
"num_base_bdevs_discovered": 3, 00:14:53.181 "num_base_bdevs_operational": 3, 00:14:53.181 "base_bdevs_list": [ 00:14:53.181 { 00:14:53.181 "name": null, 00:14:53.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.181 "is_configured": false, 00:14:53.181 "data_offset": 0, 00:14:53.181 "data_size": 63488 00:14:53.181 }, 00:14:53.181 { 00:14:53.181 "name": "BaseBdev2", 00:14:53.181 "uuid": "d673ac49-a8b3-56ff-bf0d-7f89be6ddb9e", 00:14:53.181 "is_configured": true, 00:14:53.181 "data_offset": 2048, 00:14:53.181 "data_size": 63488 00:14:53.181 }, 00:14:53.181 { 00:14:53.181 "name": "BaseBdev3", 00:14:53.181 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:53.181 "is_configured": true, 00:14:53.181 "data_offset": 2048, 00:14:53.181 "data_size": 63488 00:14:53.181 }, 00:14:53.181 { 00:14:53.181 "name": "BaseBdev4", 00:14:53.181 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:53.181 "is_configured": true, 00:14:53.181 "data_offset": 2048, 00:14:53.181 "data_size": 63488 00:14:53.181 } 00:14:53.181 ] 00:14:53.181 }' 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.181 19:13:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.752 "name": "raid_bdev1", 00:14:53.752 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:53.752 "strip_size_kb": 0, 00:14:53.752 "state": "online", 00:14:53.752 "raid_level": "raid1", 00:14:53.752 "superblock": true, 00:14:53.752 "num_base_bdevs": 4, 00:14:53.752 "num_base_bdevs_discovered": 3, 00:14:53.752 "num_base_bdevs_operational": 3, 00:14:53.752 "base_bdevs_list": [ 00:14:53.752 { 00:14:53.752 "name": null, 00:14:53.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.752 "is_configured": false, 00:14:53.752 "data_offset": 0, 00:14:53.752 "data_size": 63488 00:14:53.752 }, 00:14:53.752 { 00:14:53.752 "name": "BaseBdev2", 00:14:53.752 "uuid": "d673ac49-a8b3-56ff-bf0d-7f89be6ddb9e", 00:14:53.752 "is_configured": true, 00:14:53.752 "data_offset": 2048, 00:14:53.752 "data_size": 63488 00:14:53.752 }, 00:14:53.752 { 00:14:53.752 "name": "BaseBdev3", 00:14:53.752 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:53.752 "is_configured": true, 00:14:53.752 "data_offset": 2048, 00:14:53.752 "data_size": 63488 00:14:53.752 }, 00:14:53.752 { 00:14:53.752 "name": "BaseBdev4", 00:14:53.752 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:53.752 "is_configured": true, 00:14:53.752 "data_offset": 2048, 00:14:53.752 "data_size": 63488 00:14:53.752 } 00:14:53.752 ] 00:14:53.752 }' 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.752 [2024-11-27 19:13:03.231769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.752 [2024-11-27 19:13:03.245824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.752 19:13:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:53.752 [2024-11-27 19:13:03.247575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.692 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.692 "name": "raid_bdev1", 00:14:54.692 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:54.692 "strip_size_kb": 0, 00:14:54.692 "state": "online", 00:14:54.692 "raid_level": "raid1", 00:14:54.692 "superblock": true, 00:14:54.692 "num_base_bdevs": 4, 00:14:54.692 "num_base_bdevs_discovered": 4, 00:14:54.692 "num_base_bdevs_operational": 4, 00:14:54.692 "process": { 00:14:54.692 "type": "rebuild", 00:14:54.692 "target": "spare", 00:14:54.692 "progress": { 00:14:54.692 "blocks": 20480, 00:14:54.692 "percent": 32 00:14:54.692 } 00:14:54.692 }, 00:14:54.692 "base_bdevs_list": [ 00:14:54.692 { 00:14:54.692 "name": "spare", 00:14:54.692 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:54.692 "is_configured": true, 00:14:54.692 "data_offset": 2048, 00:14:54.692 "data_size": 63488 00:14:54.692 }, 00:14:54.692 { 00:14:54.692 "name": "BaseBdev2", 00:14:54.692 "uuid": "d673ac49-a8b3-56ff-bf0d-7f89be6ddb9e", 00:14:54.692 "is_configured": true, 00:14:54.692 "data_offset": 2048, 00:14:54.692 "data_size": 63488 00:14:54.692 }, 00:14:54.693 { 00:14:54.693 "name": "BaseBdev3", 00:14:54.693 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:54.693 "is_configured": true, 00:14:54.693 "data_offset": 2048, 00:14:54.693 "data_size": 63488 00:14:54.693 }, 00:14:54.693 { 00:14:54.693 "name": "BaseBdev4", 00:14:54.693 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:54.693 "is_configured": true, 00:14:54.693 "data_offset": 2048, 00:14:54.693 "data_size": 63488 00:14:54.693 } 00:14:54.693 ] 00:14:54.693 }' 00:14:54.693 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:54.953 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.953 [2024-11-27 19:13:04.415587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.953 [2024-11-27 19:13:04.552301] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.953 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.213 "name": "raid_bdev1", 00:14:55.213 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:55.213 "strip_size_kb": 0, 00:14:55.213 "state": "online", 00:14:55.213 "raid_level": "raid1", 00:14:55.213 "superblock": true, 00:14:55.213 "num_base_bdevs": 4, 00:14:55.213 "num_base_bdevs_discovered": 3, 00:14:55.213 "num_base_bdevs_operational": 3, 00:14:55.213 "process": { 00:14:55.213 "type": "rebuild", 00:14:55.213 "target": "spare", 00:14:55.213 "progress": { 00:14:55.213 "blocks": 24576, 00:14:55.213 "percent": 38 00:14:55.213 } 00:14:55.213 }, 00:14:55.213 "base_bdevs_list": [ 00:14:55.213 { 00:14:55.213 "name": "spare", 00:14:55.213 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:55.213 "is_configured": true, 00:14:55.213 "data_offset": 2048, 00:14:55.213 "data_size": 63488 00:14:55.213 }, 00:14:55.213 { 00:14:55.213 "name": null, 00:14:55.213 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:55.213 "is_configured": false, 00:14:55.213 "data_offset": 0, 00:14:55.213 "data_size": 63488 00:14:55.213 }, 00:14:55.213 { 00:14:55.213 "name": "BaseBdev3", 00:14:55.213 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:55.213 "is_configured": true, 00:14:55.213 "data_offset": 2048, 00:14:55.213 "data_size": 63488 00:14:55.213 }, 00:14:55.213 { 00:14:55.213 "name": "BaseBdev4", 00:14:55.213 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:55.213 "is_configured": true, 00:14:55.213 "data_offset": 2048, 00:14:55.213 "data_size": 63488 00:14:55.213 } 00:14:55.213 ] 00:14:55.213 }' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=466 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.213 
19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.213 "name": "raid_bdev1", 00:14:55.213 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:55.213 "strip_size_kb": 0, 00:14:55.213 "state": "online", 00:14:55.213 "raid_level": "raid1", 00:14:55.213 "superblock": true, 00:14:55.213 "num_base_bdevs": 4, 00:14:55.213 "num_base_bdevs_discovered": 3, 00:14:55.213 "num_base_bdevs_operational": 3, 00:14:55.213 "process": { 00:14:55.213 "type": "rebuild", 00:14:55.213 "target": "spare", 00:14:55.213 "progress": { 00:14:55.213 "blocks": 26624, 00:14:55.213 "percent": 41 00:14:55.213 } 00:14:55.213 }, 00:14:55.213 "base_bdevs_list": [ 00:14:55.213 { 00:14:55.213 "name": "spare", 00:14:55.213 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:55.213 "is_configured": true, 00:14:55.213 "data_offset": 2048, 00:14:55.213 "data_size": 63488 00:14:55.213 }, 00:14:55.213 { 00:14:55.213 "name": null, 00:14:55.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.213 "is_configured": false, 00:14:55.213 "data_offset": 0, 00:14:55.213 "data_size": 63488 00:14:55.213 }, 00:14:55.213 { 00:14:55.213 "name": "BaseBdev3", 00:14:55.213 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:55.213 "is_configured": true, 00:14:55.213 "data_offset": 2048, 00:14:55.213 "data_size": 63488 00:14:55.213 }, 00:14:55.213 { 00:14:55.213 "name": "BaseBdev4", 00:14:55.213 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:55.213 "is_configured": true, 00:14:55.213 "data_offset": 2048, 00:14:55.213 "data_size": 63488 
00:14:55.213 } 00:14:55.213 ] 00:14:55.213 }' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.213 19:13:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.636 "name": "raid_bdev1", 00:14:56.636 "uuid": 
"8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:56.636 "strip_size_kb": 0, 00:14:56.636 "state": "online", 00:14:56.636 "raid_level": "raid1", 00:14:56.636 "superblock": true, 00:14:56.636 "num_base_bdevs": 4, 00:14:56.636 "num_base_bdevs_discovered": 3, 00:14:56.636 "num_base_bdevs_operational": 3, 00:14:56.636 "process": { 00:14:56.636 "type": "rebuild", 00:14:56.636 "target": "spare", 00:14:56.636 "progress": { 00:14:56.636 "blocks": 51200, 00:14:56.636 "percent": 80 00:14:56.636 } 00:14:56.636 }, 00:14:56.636 "base_bdevs_list": [ 00:14:56.636 { 00:14:56.636 "name": "spare", 00:14:56.636 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:56.636 "is_configured": true, 00:14:56.636 "data_offset": 2048, 00:14:56.636 "data_size": 63488 00:14:56.636 }, 00:14:56.636 { 00:14:56.636 "name": null, 00:14:56.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.636 "is_configured": false, 00:14:56.636 "data_offset": 0, 00:14:56.636 "data_size": 63488 00:14:56.636 }, 00:14:56.636 { 00:14:56.636 "name": "BaseBdev3", 00:14:56.636 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:56.636 "is_configured": true, 00:14:56.636 "data_offset": 2048, 00:14:56.636 "data_size": 63488 00:14:56.636 }, 00:14:56.636 { 00:14:56.636 "name": "BaseBdev4", 00:14:56.636 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:56.636 "is_configured": true, 00:14:56.636 "data_offset": 2048, 00:14:56.636 "data_size": 63488 00:14:56.636 } 00:14:56.636 ] 00:14:56.636 }' 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.636 19:13:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.896 [2024-11-27 19:13:06.459804] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:56.896 [2024-11-27 19:13:06.459869] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:56.896 [2024-11-27 19:13:06.459971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.466 "name": "raid_bdev1", 00:14:57.466 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:57.466 "strip_size_kb": 0, 00:14:57.466 "state": "online", 00:14:57.466 "raid_level": "raid1", 00:14:57.466 "superblock": true, 00:14:57.466 "num_base_bdevs": 
4, 00:14:57.466 "num_base_bdevs_discovered": 3, 00:14:57.466 "num_base_bdevs_operational": 3, 00:14:57.466 "base_bdevs_list": [ 00:14:57.466 { 00:14:57.466 "name": "spare", 00:14:57.466 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:57.466 "is_configured": true, 00:14:57.466 "data_offset": 2048, 00:14:57.466 "data_size": 63488 00:14:57.466 }, 00:14:57.466 { 00:14:57.466 "name": null, 00:14:57.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.466 "is_configured": false, 00:14:57.466 "data_offset": 0, 00:14:57.466 "data_size": 63488 00:14:57.466 }, 00:14:57.466 { 00:14:57.466 "name": "BaseBdev3", 00:14:57.466 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:57.466 "is_configured": true, 00:14:57.466 "data_offset": 2048, 00:14:57.466 "data_size": 63488 00:14:57.466 }, 00:14:57.466 { 00:14:57.466 "name": "BaseBdev4", 00:14:57.466 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:57.466 "is_configured": true, 00:14:57.466 "data_offset": 2048, 00:14:57.466 "data_size": 63488 00:14:57.466 } 00:14:57.466 ] 00:14:57.466 }' 00:14:57.466 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.726 19:13:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.726 "name": "raid_bdev1", 00:14:57.726 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:57.726 "strip_size_kb": 0, 00:14:57.726 "state": "online", 00:14:57.726 "raid_level": "raid1", 00:14:57.726 "superblock": true, 00:14:57.726 "num_base_bdevs": 4, 00:14:57.726 "num_base_bdevs_discovered": 3, 00:14:57.726 "num_base_bdevs_operational": 3, 00:14:57.726 "base_bdevs_list": [ 00:14:57.726 { 00:14:57.726 "name": "spare", 00:14:57.726 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:57.726 "is_configured": true, 00:14:57.726 "data_offset": 2048, 00:14:57.726 "data_size": 63488 00:14:57.726 }, 00:14:57.726 { 00:14:57.726 "name": null, 00:14:57.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.726 "is_configured": false, 00:14:57.726 "data_offset": 0, 00:14:57.726 "data_size": 63488 00:14:57.726 }, 00:14:57.726 { 00:14:57.726 "name": "BaseBdev3", 00:14:57.726 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:57.726 "is_configured": true, 00:14:57.726 "data_offset": 2048, 00:14:57.726 "data_size": 63488 00:14:57.726 }, 00:14:57.726 { 00:14:57.726 "name": "BaseBdev4", 00:14:57.726 "uuid": 
"1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:57.726 "is_configured": true, 00:14:57.726 "data_offset": 2048, 00:14:57.726 "data_size": 63488 00:14:57.726 } 00:14:57.726 ] 00:14:57.726 }' 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.726 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.727 19:13:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.727 "name": "raid_bdev1", 00:14:57.727 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:57.727 "strip_size_kb": 0, 00:14:57.727 "state": "online", 00:14:57.727 "raid_level": "raid1", 00:14:57.727 "superblock": true, 00:14:57.727 "num_base_bdevs": 4, 00:14:57.727 "num_base_bdevs_discovered": 3, 00:14:57.727 "num_base_bdevs_operational": 3, 00:14:57.727 "base_bdevs_list": [ 00:14:57.727 { 00:14:57.727 "name": "spare", 00:14:57.727 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:57.727 "is_configured": true, 00:14:57.727 "data_offset": 2048, 00:14:57.727 "data_size": 63488 00:14:57.727 }, 00:14:57.727 { 00:14:57.727 "name": null, 00:14:57.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.727 "is_configured": false, 00:14:57.727 "data_offset": 0, 00:14:57.727 "data_size": 63488 00:14:57.727 }, 00:14:57.727 { 00:14:57.727 "name": "BaseBdev3", 00:14:57.727 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:57.727 "is_configured": true, 00:14:57.727 "data_offset": 2048, 00:14:57.727 "data_size": 63488 00:14:57.727 }, 00:14:57.727 { 00:14:57.727 "name": "BaseBdev4", 00:14:57.727 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:57.727 "is_configured": true, 00:14:57.727 "data_offset": 2048, 00:14:57.727 "data_size": 63488 00:14:57.727 } 00:14:57.727 ] 00:14:57.727 }' 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.727 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.297 [2024-11-27 19:13:07.710700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.297 [2024-11-27 19:13:07.710810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.297 [2024-11-27 19:13:07.710935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.297 [2024-11-27 19:13:07.711040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.297 [2024-11-27 19:13:07.711088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.297 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:58.298 
19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.298 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:58.557 /dev/nbd0 00:14:58.557 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.557 19:13:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.558 19:13:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.558 1+0 records in 00:14:58.558 1+0 records out 00:14:58.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499642 s, 8.2 MB/s 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.558 19:13:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:58.558 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.558 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.558 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:58.558 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.558 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.558 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:58.818 /dev/nbd1 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.818 1+0 records in 00:14:58.818 1+0 records out 00:14:58.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546676 s, 7.5 MB/s 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.818 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.079 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.339 [2024-11-27 19:13:08.880845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.339 [2024-11-27 19:13:08.880905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.339 [2024-11-27 19:13:08.880930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:59.339 [2024-11-27 19:13:08.880940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.339 [2024-11-27 19:13:08.883201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.339 [2024-11-27 19:13:08.883324] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:14:59.339 [2024-11-27 19:13:08.883431] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:59.339 [2024-11-27 19:13:08.883507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.339 [2024-11-27 19:13:08.883661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.339 [2024-11-27 19:13:08.883775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.339 spare 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.339 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.599 [2024-11-27 19:13:08.983669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:59.599 [2024-11-27 19:13:08.983704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.599 [2024-11-27 19:13:08.983955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:59.599 [2024-11-27 19:13:08.984121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:59.599 [2024-11-27 19:13:08.984142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:59.599 [2024-11-27 19:13:08.984308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.599 19:13:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.599 19:13:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.599 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.599 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.599 "name": "raid_bdev1", 00:14:59.599 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:59.599 "strip_size_kb": 0, 00:14:59.599 "state": "online", 00:14:59.599 "raid_level": "raid1", 00:14:59.599 "superblock": true, 00:14:59.599 "num_base_bdevs": 4, 00:14:59.599 "num_base_bdevs_discovered": 3, 00:14:59.599 "num_base_bdevs_operational": 3, 00:14:59.600 "base_bdevs_list": [ 00:14:59.600 { 
00:14:59.600 "name": "spare", 00:14:59.600 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:59.600 "is_configured": true, 00:14:59.600 "data_offset": 2048, 00:14:59.600 "data_size": 63488 00:14:59.600 }, 00:14:59.600 { 00:14:59.600 "name": null, 00:14:59.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.600 "is_configured": false, 00:14:59.600 "data_offset": 2048, 00:14:59.600 "data_size": 63488 00:14:59.600 }, 00:14:59.600 { 00:14:59.600 "name": "BaseBdev3", 00:14:59.600 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:59.600 "is_configured": true, 00:14:59.600 "data_offset": 2048, 00:14:59.600 "data_size": 63488 00:14:59.600 }, 00:14:59.600 { 00:14:59.600 "name": "BaseBdev4", 00:14:59.600 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:59.600 "is_configured": true, 00:14:59.600 "data_offset": 2048, 00:14:59.600 "data_size": 63488 00:14:59.600 } 00:14:59.600 ] 00:14:59.600 }' 00:14:59.600 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.600 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.858 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.858 "name": "raid_bdev1", 00:14:59.858 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:14:59.859 "strip_size_kb": 0, 00:14:59.859 "state": "online", 00:14:59.859 "raid_level": "raid1", 00:14:59.859 "superblock": true, 00:14:59.859 "num_base_bdevs": 4, 00:14:59.859 "num_base_bdevs_discovered": 3, 00:14:59.859 "num_base_bdevs_operational": 3, 00:14:59.859 "base_bdevs_list": [ 00:14:59.859 { 00:14:59.859 "name": "spare", 00:14:59.859 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:14:59.859 "is_configured": true, 00:14:59.859 "data_offset": 2048, 00:14:59.859 "data_size": 63488 00:14:59.859 }, 00:14:59.859 { 00:14:59.859 "name": null, 00:14:59.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.859 "is_configured": false, 00:14:59.859 "data_offset": 2048, 00:14:59.859 "data_size": 63488 00:14:59.859 }, 00:14:59.859 { 00:14:59.859 "name": "BaseBdev3", 00:14:59.859 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:14:59.859 "is_configured": true, 00:14:59.859 "data_offset": 2048, 00:14:59.859 "data_size": 63488 00:14:59.859 }, 00:14:59.859 { 00:14:59.859 "name": "BaseBdev4", 00:14:59.859 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:14:59.859 "is_configured": true, 00:14:59.859 "data_offset": 2048, 00:14:59.859 "data_size": 63488 00:14:59.859 } 00:14:59.859 ] 00:14:59.859 }' 00:14:59.859 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.116 19:13:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.116 [2024-11-27 19:13:09.575722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.116 19:13:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.116 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.117 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.117 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.117 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.117 "name": "raid_bdev1", 00:15:00.117 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:00.117 "strip_size_kb": 0, 00:15:00.117 "state": "online", 00:15:00.117 "raid_level": "raid1", 00:15:00.117 "superblock": true, 00:15:00.117 "num_base_bdevs": 4, 00:15:00.117 "num_base_bdevs_discovered": 2, 00:15:00.117 "num_base_bdevs_operational": 2, 00:15:00.117 "base_bdevs_list": [ 00:15:00.117 { 00:15:00.117 "name": null, 00:15:00.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.117 "is_configured": false, 00:15:00.117 "data_offset": 0, 00:15:00.117 "data_size": 63488 00:15:00.117 }, 00:15:00.117 { 00:15:00.117 "name": null, 00:15:00.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.117 "is_configured": false, 00:15:00.117 "data_offset": 2048, 00:15:00.117 "data_size": 63488 00:15:00.117 }, 00:15:00.117 { 00:15:00.117 "name": "BaseBdev3", 00:15:00.117 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:00.117 
"is_configured": true, 00:15:00.117 "data_offset": 2048, 00:15:00.117 "data_size": 63488 00:15:00.117 }, 00:15:00.117 { 00:15:00.117 "name": "BaseBdev4", 00:15:00.117 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:00.117 "is_configured": true, 00:15:00.117 "data_offset": 2048, 00:15:00.117 "data_size": 63488 00:15:00.117 } 00:15:00.117 ] 00:15:00.117 }' 00:15:00.117 19:13:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.117 19:13:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.684 19:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.684 19:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.684 19:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.684 [2024-11-27 19:13:10.062971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.684 [2024-11-27 19:13:10.063147] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:00.684 [2024-11-27 19:13:10.063161] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:00.684 [2024-11-27 19:13:10.063195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.684 [2024-11-27 19:13:10.075747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:00.684 19:13:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.684 19:13:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:00.684 [2024-11-27 19:13:10.077663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.622 "name": "raid_bdev1", 00:15:01.622 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:01.622 "strip_size_kb": 0, 00:15:01.622 "state": "online", 00:15:01.622 "raid_level": "raid1", 
00:15:01.622 "superblock": true, 00:15:01.622 "num_base_bdevs": 4, 00:15:01.622 "num_base_bdevs_discovered": 3, 00:15:01.622 "num_base_bdevs_operational": 3, 00:15:01.622 "process": { 00:15:01.622 "type": "rebuild", 00:15:01.622 "target": "spare", 00:15:01.622 "progress": { 00:15:01.622 "blocks": 20480, 00:15:01.622 "percent": 32 00:15:01.622 } 00:15:01.622 }, 00:15:01.622 "base_bdevs_list": [ 00:15:01.622 { 00:15:01.622 "name": "spare", 00:15:01.622 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:15:01.622 "is_configured": true, 00:15:01.622 "data_offset": 2048, 00:15:01.622 "data_size": 63488 00:15:01.622 }, 00:15:01.622 { 00:15:01.622 "name": null, 00:15:01.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.622 "is_configured": false, 00:15:01.622 "data_offset": 2048, 00:15:01.622 "data_size": 63488 00:15:01.622 }, 00:15:01.622 { 00:15:01.622 "name": "BaseBdev3", 00:15:01.622 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:01.622 "is_configured": true, 00:15:01.622 "data_offset": 2048, 00:15:01.622 "data_size": 63488 00:15:01.622 }, 00:15:01.622 { 00:15:01.622 "name": "BaseBdev4", 00:15:01.622 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:01.622 "is_configured": true, 00:15:01.622 "data_offset": 2048, 00:15:01.622 "data_size": 63488 00:15:01.622 } 00:15:01.622 ] 00:15:01.622 }' 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.622 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.622 [2024-11-27 19:13:11.236858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.883 [2024-11-27 19:13:11.282266] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:01.883 [2024-11-27 19:13:11.282368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.883 [2024-11-27 19:13:11.282406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.883 [2024-11-27 19:13:11.282425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.883 "name": "raid_bdev1", 00:15:01.883 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:01.883 "strip_size_kb": 0, 00:15:01.883 "state": "online", 00:15:01.883 "raid_level": "raid1", 00:15:01.883 "superblock": true, 00:15:01.883 "num_base_bdevs": 4, 00:15:01.883 "num_base_bdevs_discovered": 2, 00:15:01.883 "num_base_bdevs_operational": 2, 00:15:01.883 "base_bdevs_list": [ 00:15:01.883 { 00:15:01.883 "name": null, 00:15:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.883 "is_configured": false, 00:15:01.883 "data_offset": 0, 00:15:01.883 "data_size": 63488 00:15:01.883 }, 00:15:01.883 { 00:15:01.883 "name": null, 00:15:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.883 "is_configured": false, 00:15:01.883 "data_offset": 2048, 00:15:01.883 "data_size": 63488 00:15:01.883 }, 00:15:01.883 { 00:15:01.883 "name": "BaseBdev3", 00:15:01.883 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:01.883 "is_configured": true, 00:15:01.883 "data_offset": 2048, 00:15:01.883 "data_size": 63488 00:15:01.883 }, 00:15:01.883 { 00:15:01.883 "name": "BaseBdev4", 00:15:01.883 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:01.883 "is_configured": true, 00:15:01.883 "data_offset": 2048, 00:15:01.883 "data_size": 63488 00:15:01.883 } 00:15:01.883 ] 00:15:01.883 }' 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:01.883 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.143 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.143 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.143 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.143 [2024-11-27 19:13:11.749821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.143 [2024-11-27 19:13:11.749938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.143 [2024-11-27 19:13:11.749994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:02.143 [2024-11-27 19:13:11.750030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.143 [2024-11-27 19:13:11.750497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.143 [2024-11-27 19:13:11.750517] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.143 [2024-11-27 19:13:11.750602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:02.143 [2024-11-27 19:13:11.750614] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:02.143 [2024-11-27 19:13:11.750631] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:02.143 [2024-11-27 19:13:11.750649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.143 [2024-11-27 19:13:11.764211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:02.143 spare 00:15:02.143 19:13:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.143 19:13:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:02.143 [2024-11-27 19:13:11.765991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.527 "name": "raid_bdev1", 00:15:03.527 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:03.527 "strip_size_kb": 0, 00:15:03.527 "state": "online", 00:15:03.527 
"raid_level": "raid1", 00:15:03.527 "superblock": true, 00:15:03.527 "num_base_bdevs": 4, 00:15:03.527 "num_base_bdevs_discovered": 3, 00:15:03.527 "num_base_bdevs_operational": 3, 00:15:03.527 "process": { 00:15:03.527 "type": "rebuild", 00:15:03.527 "target": "spare", 00:15:03.527 "progress": { 00:15:03.527 "blocks": 20480, 00:15:03.527 "percent": 32 00:15:03.527 } 00:15:03.527 }, 00:15:03.527 "base_bdevs_list": [ 00:15:03.527 { 00:15:03.527 "name": "spare", 00:15:03.527 "uuid": "d696f6cb-3911-5ffd-9098-f32226a06ed4", 00:15:03.527 "is_configured": true, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 }, 00:15:03.527 { 00:15:03.527 "name": null, 00:15:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.527 "is_configured": false, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 }, 00:15:03.527 { 00:15:03.527 "name": "BaseBdev3", 00:15:03.527 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:03.527 "is_configured": true, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 }, 00:15:03.527 { 00:15:03.527 "name": "BaseBdev4", 00:15:03.527 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:03.527 "is_configured": true, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 } 00:15:03.527 ] 00:15:03.527 }' 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.527 [2024-11-27 19:13:12.905669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.527 [2024-11-27 19:13:12.970546] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:03.527 [2024-11-27 19:13:12.970602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.527 [2024-11-27 19:13:12.970616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.527 [2024-11-27 19:13:12.970624] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.527 
19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.527 19:13:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.527 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.527 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.527 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.527 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.527 "name": "raid_bdev1", 00:15:03.527 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:03.527 "strip_size_kb": 0, 00:15:03.527 "state": "online", 00:15:03.527 "raid_level": "raid1", 00:15:03.527 "superblock": true, 00:15:03.527 "num_base_bdevs": 4, 00:15:03.527 "num_base_bdevs_discovered": 2, 00:15:03.527 "num_base_bdevs_operational": 2, 00:15:03.527 "base_bdevs_list": [ 00:15:03.527 { 00:15:03.527 "name": null, 00:15:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.527 "is_configured": false, 00:15:03.527 "data_offset": 0, 00:15:03.527 "data_size": 63488 00:15:03.527 }, 00:15:03.527 { 00:15:03.527 "name": null, 00:15:03.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.527 "is_configured": false, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 }, 00:15:03.527 { 00:15:03.527 "name": "BaseBdev3", 00:15:03.527 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:03.527 "is_configured": true, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 }, 00:15:03.527 { 00:15:03.527 "name": "BaseBdev4", 00:15:03.527 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:03.527 "is_configured": true, 00:15:03.527 "data_offset": 2048, 00:15:03.527 "data_size": 63488 00:15:03.527 } 00:15:03.527 ] 00:15:03.527 }' 00:15:03.527 19:13:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.527 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.098 "name": "raid_bdev1", 00:15:04.098 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:04.098 "strip_size_kb": 0, 00:15:04.098 "state": "online", 00:15:04.098 "raid_level": "raid1", 00:15:04.098 "superblock": true, 00:15:04.098 "num_base_bdevs": 4, 00:15:04.098 "num_base_bdevs_discovered": 2, 00:15:04.098 "num_base_bdevs_operational": 2, 00:15:04.098 "base_bdevs_list": [ 00:15:04.098 { 00:15:04.098 "name": null, 00:15:04.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.098 "is_configured": false, 00:15:04.098 "data_offset": 0, 00:15:04.098 "data_size": 63488 00:15:04.098 }, 00:15:04.098 
{ 00:15:04.098 "name": null, 00:15:04.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.098 "is_configured": false, 00:15:04.098 "data_offset": 2048, 00:15:04.098 "data_size": 63488 00:15:04.098 }, 00:15:04.098 { 00:15:04.098 "name": "BaseBdev3", 00:15:04.098 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:04.098 "is_configured": true, 00:15:04.098 "data_offset": 2048, 00:15:04.098 "data_size": 63488 00:15:04.098 }, 00:15:04.098 { 00:15:04.098 "name": "BaseBdev4", 00:15:04.098 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:04.098 "is_configured": true, 00:15:04.098 "data_offset": 2048, 00:15:04.098 "data_size": 63488 00:15:04.098 } 00:15:04.098 ] 00:15:04.098 }' 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.098 [2024-11-27 19:13:13.589944] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.098 [2024-11-27 19:13:13.590002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.098 [2024-11-27 19:13:13.590021] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:04.098 [2024-11-27 19:13:13.590033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.098 [2024-11-27 19:13:13.590468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.098 [2024-11-27 19:13:13.590489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.098 [2024-11-27 19:13:13.590562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:04.098 [2024-11-27 19:13:13.590578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:04.098 [2024-11-27 19:13:13.590586] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:04.098 [2024-11-27 19:13:13.590610] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:04.098 BaseBdev1 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.098 19:13:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.039 19:13:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.039 "name": "raid_bdev1", 00:15:05.039 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:05.039 "strip_size_kb": 0, 00:15:05.039 "state": "online", 00:15:05.039 "raid_level": "raid1", 00:15:05.039 "superblock": true, 00:15:05.039 "num_base_bdevs": 4, 00:15:05.039 "num_base_bdevs_discovered": 2, 00:15:05.039 "num_base_bdevs_operational": 2, 00:15:05.039 "base_bdevs_list": [ 00:15:05.039 { 00:15:05.039 "name": null, 00:15:05.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.039 "is_configured": false, 00:15:05.039 "data_offset": 0, 00:15:05.039 "data_size": 63488 00:15:05.039 }, 00:15:05.039 { 00:15:05.039 "name": null, 00:15:05.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.039 
"is_configured": false, 00:15:05.039 "data_offset": 2048, 00:15:05.039 "data_size": 63488 00:15:05.039 }, 00:15:05.039 { 00:15:05.039 "name": "BaseBdev3", 00:15:05.039 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:05.039 "is_configured": true, 00:15:05.039 "data_offset": 2048, 00:15:05.039 "data_size": 63488 00:15:05.039 }, 00:15:05.039 { 00:15:05.039 "name": "BaseBdev4", 00:15:05.039 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:05.039 "is_configured": true, 00:15:05.039 "data_offset": 2048, 00:15:05.039 "data_size": 63488 00:15:05.039 } 00:15:05.039 ] 00:15:05.039 }' 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.039 19:13:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:05.610 "name": "raid_bdev1", 00:15:05.610 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:05.610 "strip_size_kb": 0, 00:15:05.610 "state": "online", 00:15:05.610 "raid_level": "raid1", 00:15:05.610 "superblock": true, 00:15:05.610 "num_base_bdevs": 4, 00:15:05.610 "num_base_bdevs_discovered": 2, 00:15:05.610 "num_base_bdevs_operational": 2, 00:15:05.610 "base_bdevs_list": [ 00:15:05.610 { 00:15:05.610 "name": null, 00:15:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.610 "is_configured": false, 00:15:05.610 "data_offset": 0, 00:15:05.610 "data_size": 63488 00:15:05.610 }, 00:15:05.610 { 00:15:05.610 "name": null, 00:15:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.610 "is_configured": false, 00:15:05.610 "data_offset": 2048, 00:15:05.610 "data_size": 63488 00:15:05.610 }, 00:15:05.610 { 00:15:05.610 "name": "BaseBdev3", 00:15:05.610 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:05.610 "is_configured": true, 00:15:05.610 "data_offset": 2048, 00:15:05.610 "data_size": 63488 00:15:05.610 }, 00:15:05.610 { 00:15:05.610 "name": "BaseBdev4", 00:15:05.610 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:05.610 "is_configured": true, 00:15:05.610 "data_offset": 2048, 00:15:05.610 "data_size": 63488 00:15:05.610 } 00:15:05.610 ] 00:15:05.610 }' 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 [2024-11-27 19:13:15.231245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.610 [2024-11-27 19:13:15.231438] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:05.610 [2024-11-27 19:13:15.231452] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:05.610 request: 00:15:05.610 { 00:15:05.610 "base_bdev": "BaseBdev1", 00:15:05.610 "raid_bdev": "raid_bdev1", 00:15:05.610 "method": "bdev_raid_add_base_bdev", 00:15:05.610 "req_id": 1 00:15:05.610 } 00:15:05.610 Got JSON-RPC error response 00:15:05.610 response: 00:15:05.610 { 00:15:05.610 "code": -22, 00:15:05.610 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:05.610 } 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.610 19:13:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.993 "name": "raid_bdev1", 00:15:06.993 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:06.993 "strip_size_kb": 0, 00:15:06.993 "state": "online", 00:15:06.993 "raid_level": "raid1", 00:15:06.993 "superblock": true, 00:15:06.993 "num_base_bdevs": 4, 00:15:06.993 "num_base_bdevs_discovered": 2, 00:15:06.993 "num_base_bdevs_operational": 2, 00:15:06.993 "base_bdevs_list": [ 00:15:06.993 { 00:15:06.993 "name": null, 00:15:06.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.993 "is_configured": false, 00:15:06.993 "data_offset": 0, 00:15:06.993 "data_size": 63488 00:15:06.993 }, 00:15:06.993 { 00:15:06.993 "name": null, 00:15:06.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.993 "is_configured": false, 00:15:06.993 "data_offset": 2048, 00:15:06.993 "data_size": 63488 00:15:06.993 }, 00:15:06.993 { 00:15:06.993 "name": "BaseBdev3", 00:15:06.993 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:06.993 "is_configured": true, 00:15:06.993 "data_offset": 2048, 00:15:06.993 "data_size": 63488 00:15:06.993 }, 00:15:06.993 { 00:15:06.993 "name": "BaseBdev4", 00:15:06.993 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:06.993 "is_configured": true, 00:15:06.993 "data_offset": 2048, 00:15:06.993 "data_size": 63488 00:15:06.993 } 00:15:06.993 ] 00:15:06.993 }' 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.993 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.253 19:13:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.253 "name": "raid_bdev1", 00:15:07.253 "uuid": "8d22c699-fb09-4b30-8718-12f4c8c1920e", 00:15:07.253 "strip_size_kb": 0, 00:15:07.253 "state": "online", 00:15:07.253 "raid_level": "raid1", 00:15:07.253 "superblock": true, 00:15:07.253 "num_base_bdevs": 4, 00:15:07.253 "num_base_bdevs_discovered": 2, 00:15:07.253 "num_base_bdevs_operational": 2, 00:15:07.253 "base_bdevs_list": [ 00:15:07.253 { 00:15:07.253 "name": null, 00:15:07.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.253 "is_configured": false, 00:15:07.253 "data_offset": 0, 00:15:07.253 "data_size": 63488 00:15:07.253 }, 00:15:07.253 { 00:15:07.253 "name": null, 00:15:07.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.253 "is_configured": false, 00:15:07.253 "data_offset": 2048, 00:15:07.253 "data_size": 63488 00:15:07.253 }, 00:15:07.253 { 00:15:07.253 "name": "BaseBdev3", 00:15:07.253 "uuid": "55dfba64-15a5-5834-9b99-5769a08de9f5", 00:15:07.253 "is_configured": true, 00:15:07.253 "data_offset": 2048, 00:15:07.253 "data_size": 63488 00:15:07.253 }, 
00:15:07.253 { 00:15:07.253 "name": "BaseBdev4", 00:15:07.253 "uuid": "1f1c4392-f7d9-50ec-b500-69aa5d51fc49", 00:15:07.253 "is_configured": true, 00:15:07.253 "data_offset": 2048, 00:15:07.253 "data_size": 63488 00:15:07.253 } 00:15:07.253 ] 00:15:07.253 }' 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78076 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78076 ']' 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78076 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.253 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78076 00:15:07.513 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.513 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.513 killing process with pid 78076 00:15:07.513 Received shutdown signal, test time was about 60.000000 seconds 00:15:07.513 00:15:07.513 Latency(us) 00:15:07.513 [2024-11-27T19:13:17.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.513 [2024-11-27T19:13:17.149Z] 
=================================================================================================================== 00:15:07.513 [2024-11-27T19:13:17.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.513 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78076' 00:15:07.513 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78076 00:15:07.513 [2024-11-27 19:13:16.897273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.513 [2024-11-27 19:13:16.897392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.513 19:13:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78076 00:15:07.513 [2024-11-27 19:13:16.897458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.513 [2024-11-27 19:13:16.897469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:07.773 [2024-11-27 19:13:17.355640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.155 00:15:09.155 real 0m25.131s 00:15:09.155 user 0m29.902s 00:15:09.155 sys 0m4.155s 00:15:09.155 ************************************ 00:15:09.155 END TEST raid_rebuild_test_sb 00:15:09.155 ************************************ 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.155 19:13:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:09.155 19:13:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.155 19:13:18 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.155 19:13:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.155 ************************************ 00:15:09.155 START TEST raid_rebuild_test_io 00:15:09.155 ************************************ 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78829 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78829 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78829 ']' 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.155 19:13:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.155 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.155 Zero copy mechanism will not be used. 00:15:09.155 [2024-11-27 19:13:18.591002] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:15:09.155 [2024-11-27 19:13:18.591134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78829 ] 00:15:09.155 [2024-11-27 19:13:18.769460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.415 [2024-11-27 19:13:18.879457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.675 [2024-11-27 19:13:19.067465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.675 [2024-11-27 19:13:19.067598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.939 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.939 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:09.939 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:15:09.939 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.940 BaseBdev1_malloc 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.940 [2024-11-27 19:13:19.465646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.940 [2024-11-27 19:13:19.465809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.940 [2024-11-27 19:13:19.465848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.940 [2024-11-27 19:13:19.465878] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.940 [2024-11-27 19:13:19.467900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.940 [2024-11-27 19:13:19.467979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.940 BaseBdev1 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.940 BaseBdev2_malloc 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.940 [2024-11-27 19:13:19.518929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:09.940 [2024-11-27 19:13:19.519043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.940 [2024-11-27 19:13:19.519070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:09.940 [2024-11-27 19:13:19.519081] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.940 [2024-11-27 19:13:19.521065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.940 [2024-11-27 19:13:19.521116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:09.940 BaseBdev2 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.940 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.212 BaseBdev3_malloc 00:15:10.212 
19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.212 [2024-11-27 19:13:19.606804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.212 [2024-11-27 19:13:19.606917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.212 [2024-11-27 19:13:19.606954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.212 [2024-11-27 19:13:19.606984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.212 [2024-11-27 19:13:19.608967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.212 [2024-11-27 19:13:19.609047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.212 BaseBdev3 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.212 BaseBdev4_malloc 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.212 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.212 [2024-11-27 19:13:19.655622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:10.212 [2024-11-27 19:13:19.655680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.212 [2024-11-27 19:13:19.655708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:10.212 [2024-11-27 19:13:19.655720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.212 [2024-11-27 19:13:19.657599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.213 [2024-11-27 19:13:19.657641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:10.213 BaseBdev4 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 spare_malloc 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 spare_delay 00:15:10.213 
19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 [2024-11-27 19:13:19.719678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.213 [2024-11-27 19:13:19.719768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.213 [2024-11-27 19:13:19.719784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:10.213 [2024-11-27 19:13:19.719795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.213 [2024-11-27 19:13:19.721722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.213 [2024-11-27 19:13:19.721759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.213 spare 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 [2024-11-27 19:13:19.731714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.213 [2024-11-27 19:13:19.733521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.213 [2024-11-27 19:13:19.733625] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.213 [2024-11-27 19:13:19.733721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.213 [2024-11-27 19:13:19.733833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.213 [2024-11-27 19:13:19.733875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:10.213 [2024-11-27 19:13:19.734141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.213 [2024-11-27 19:13:19.734346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.213 [2024-11-27 19:13:19.734392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.213 [2024-11-27 19:13:19.734564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.213 
19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.213 "name": "raid_bdev1", 00:15:10.213 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:10.213 "strip_size_kb": 0, 00:15:10.213 "state": "online", 00:15:10.213 "raid_level": "raid1", 00:15:10.213 "superblock": false, 00:15:10.213 "num_base_bdevs": 4, 00:15:10.213 "num_base_bdevs_discovered": 4, 00:15:10.213 "num_base_bdevs_operational": 4, 00:15:10.213 "base_bdevs_list": [ 00:15:10.213 { 00:15:10.213 "name": "BaseBdev1", 00:15:10.213 "uuid": "2b7da4f9-d8de-5b10-935c-2a8ae4d4c77c", 00:15:10.213 "is_configured": true, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 65536 00:15:10.213 }, 00:15:10.213 { 00:15:10.213 "name": "BaseBdev2", 00:15:10.213 "uuid": "a21f490a-6644-5380-8de2-7abe08cd500b", 00:15:10.213 "is_configured": true, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 65536 00:15:10.213 }, 00:15:10.213 { 00:15:10.213 "name": "BaseBdev3", 00:15:10.213 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:10.213 "is_configured": true, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 65536 00:15:10.213 }, 00:15:10.213 { 00:15:10.213 "name": "BaseBdev4", 00:15:10.213 "uuid": 
"90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:10.213 "is_configured": true, 00:15:10.213 "data_offset": 0, 00:15:10.213 "data_size": 65536 00:15:10.213 } 00:15:10.213 ] 00:15:10.213 }' 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.213 19:13:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.784 [2024-11-27 19:13:20.179246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.784 [2024-11-27 19:13:20.266790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.784 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.785 19:13:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.785 "name": "raid_bdev1", 00:15:10.785 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:10.785 "strip_size_kb": 0, 00:15:10.785 "state": "online", 00:15:10.785 "raid_level": "raid1", 00:15:10.785 "superblock": false, 00:15:10.785 "num_base_bdevs": 4, 00:15:10.785 "num_base_bdevs_discovered": 3, 00:15:10.785 "num_base_bdevs_operational": 3, 00:15:10.785 "base_bdevs_list": [ 00:15:10.785 { 00:15:10.785 "name": null, 00:15:10.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.785 "is_configured": false, 00:15:10.785 "data_offset": 0, 00:15:10.785 "data_size": 65536 00:15:10.785 }, 00:15:10.785 { 00:15:10.785 "name": "BaseBdev2", 00:15:10.785 "uuid": "a21f490a-6644-5380-8de2-7abe08cd500b", 00:15:10.785 "is_configured": true, 00:15:10.785 "data_offset": 0, 00:15:10.785 "data_size": 65536 00:15:10.785 }, 00:15:10.785 { 00:15:10.785 "name": "BaseBdev3", 00:15:10.785 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:10.785 "is_configured": true, 00:15:10.785 "data_offset": 0, 00:15:10.785 "data_size": 65536 00:15:10.785 }, 00:15:10.785 { 00:15:10.785 "name": "BaseBdev4", 00:15:10.785 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:10.785 "is_configured": true, 00:15:10.785 "data_offset": 0, 00:15:10.785 "data_size": 65536 00:15:10.785 } 00:15:10.785 ] 00:15:10.785 }' 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.785 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.785 [2024-11-27 19:13:20.361971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:10.785 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:10.785 Zero copy mechanism will not be used. 00:15:10.785 Running I/O for 60 seconds... 00:15:11.356 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.356 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.356 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.356 [2024-11-27 19:13:20.750346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.356 19:13:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.356 19:13:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:11.356 [2024-11-27 19:13:20.808761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:11.356 [2024-11-27 19:13:20.810650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.356 [2024-11-27 19:13:20.918958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.356 [2024-11-27 19:13:20.919371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:11.617 [2024-11-27 19:13:21.041740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:11.617 [2024-11-27 19:13:21.042398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:11.877 [2024-11-27 19:13:21.368237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:12.138 158.00 IOPS, 474.00 MiB/s [2024-11-27T19:13:21.774Z] [2024-11-27 
19:13:21.587667] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.138 [2024-11-27 19:13:21.588019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.398 [2024-11-27 19:13:21.824022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.398 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.398 "name": "raid_bdev1", 00:15:12.398 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:12.398 "strip_size_kb": 0, 00:15:12.398 "state": "online", 00:15:12.398 "raid_level": "raid1", 00:15:12.398 "superblock": false, 00:15:12.398 "num_base_bdevs": 4, 00:15:12.398 "num_base_bdevs_discovered": 4, 
00:15:12.398 "num_base_bdevs_operational": 4, 00:15:12.398 "process": { 00:15:12.398 "type": "rebuild", 00:15:12.398 "target": "spare", 00:15:12.398 "progress": { 00:15:12.398 "blocks": 12288, 00:15:12.399 "percent": 18 00:15:12.399 } 00:15:12.399 }, 00:15:12.399 "base_bdevs_list": [ 00:15:12.399 { 00:15:12.399 "name": "spare", 00:15:12.399 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:12.399 "is_configured": true, 00:15:12.399 "data_offset": 0, 00:15:12.399 "data_size": 65536 00:15:12.399 }, 00:15:12.399 { 00:15:12.399 "name": "BaseBdev2", 00:15:12.399 "uuid": "a21f490a-6644-5380-8de2-7abe08cd500b", 00:15:12.399 "is_configured": true, 00:15:12.399 "data_offset": 0, 00:15:12.399 "data_size": 65536 00:15:12.399 }, 00:15:12.399 { 00:15:12.399 "name": "BaseBdev3", 00:15:12.399 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:12.399 "is_configured": true, 00:15:12.399 "data_offset": 0, 00:15:12.399 "data_size": 65536 00:15:12.399 }, 00:15:12.399 { 00:15:12.399 "name": "BaseBdev4", 00:15:12.399 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:12.399 "is_configured": true, 00:15:12.399 "data_offset": 0, 00:15:12.399 "data_size": 65536 00:15:12.399 } 00:15:12.399 ] 00:15:12.399 }' 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.399 19:13:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:12.399 [2024-11-27 19:13:21.946752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.399 [2024-11-27 19:13:21.947337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:12.659 [2024-11-27 19:13:22.066888] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.659 [2024-11-27 19:13:22.071182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.659 [2024-11-27 19:13:22.071233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.659 [2024-11-27 19:13:22.071246] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.659 [2024-11-27 19:13:22.098414] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.659 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.660 "name": "raid_bdev1", 00:15:12.660 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:12.660 "strip_size_kb": 0, 00:15:12.660 "state": "online", 00:15:12.660 "raid_level": "raid1", 00:15:12.660 "superblock": false, 00:15:12.660 "num_base_bdevs": 4, 00:15:12.660 "num_base_bdevs_discovered": 3, 00:15:12.660 "num_base_bdevs_operational": 3, 00:15:12.660 "base_bdevs_list": [ 00:15:12.660 { 00:15:12.660 "name": null, 00:15:12.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.660 "is_configured": false, 00:15:12.660 "data_offset": 0, 00:15:12.660 "data_size": 65536 00:15:12.660 }, 00:15:12.660 { 00:15:12.660 "name": "BaseBdev2", 00:15:12.660 "uuid": "a21f490a-6644-5380-8de2-7abe08cd500b", 00:15:12.660 "is_configured": true, 00:15:12.660 "data_offset": 0, 00:15:12.660 "data_size": 65536 00:15:12.660 }, 00:15:12.660 { 00:15:12.660 "name": "BaseBdev3", 00:15:12.660 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:12.660 "is_configured": true, 00:15:12.660 "data_offset": 0, 00:15:12.660 "data_size": 65536 00:15:12.660 }, 00:15:12.660 { 00:15:12.660 "name": "BaseBdev4", 00:15:12.660 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:12.660 
"is_configured": true, 00:15:12.660 "data_offset": 0, 00:15:12.660 "data_size": 65536 00:15:12.660 } 00:15:12.660 ] 00:15:12.660 }' 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.660 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.181 135.50 IOPS, 406.50 MiB/s [2024-11-27T19:13:22.817Z] 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.181 "name": "raid_bdev1", 00:15:13.181 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:13.181 "strip_size_kb": 0, 00:15:13.181 "state": "online", 00:15:13.181 "raid_level": "raid1", 00:15:13.181 "superblock": false, 00:15:13.181 "num_base_bdevs": 4, 00:15:13.181 "num_base_bdevs_discovered": 3, 00:15:13.181 "num_base_bdevs_operational": 3, 00:15:13.181 "base_bdevs_list": [ 00:15:13.181 { 
00:15:13.181 "name": null, 00:15:13.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.181 "is_configured": false, 00:15:13.181 "data_offset": 0, 00:15:13.181 "data_size": 65536 00:15:13.181 }, 00:15:13.181 { 00:15:13.181 "name": "BaseBdev2", 00:15:13.181 "uuid": "a21f490a-6644-5380-8de2-7abe08cd500b", 00:15:13.181 "is_configured": true, 00:15:13.181 "data_offset": 0, 00:15:13.181 "data_size": 65536 00:15:13.181 }, 00:15:13.181 { 00:15:13.181 "name": "BaseBdev3", 00:15:13.181 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:13.181 "is_configured": true, 00:15:13.181 "data_offset": 0, 00:15:13.181 "data_size": 65536 00:15:13.181 }, 00:15:13.181 { 00:15:13.181 "name": "BaseBdev4", 00:15:13.181 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:13.181 "is_configured": true, 00:15:13.181 "data_offset": 0, 00:15:13.181 "data_size": 65536 00:15:13.181 } 00:15:13.181 ] 00:15:13.181 }' 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.181 [2024-11-27 19:13:22.713146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.181 19:13:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # 
sleep 1 00:15:13.181 [2024-11-27 19:13:22.775424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:13.181 [2024-11-27 19:13:22.777344] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.442 [2024-11-27 19:13:22.903986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.442 [2024-11-27 19:13:22.904576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.442 [2024-11-27 19:13:23.021151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.442 [2024-11-27 19:13:23.021890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:14.012 [2024-11-27 19:13:23.355034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:14.012 [2024-11-27 19:13:23.356496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:14.012 162.00 IOPS, 486.00 MiB/s [2024-11-27T19:13:23.648Z] [2024-11-27 19:13:23.594668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.272 "name": "raid_bdev1", 00:15:14.272 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:14.272 "strip_size_kb": 0, 00:15:14.272 "state": "online", 00:15:14.272 "raid_level": "raid1", 00:15:14.272 "superblock": false, 00:15:14.272 "num_base_bdevs": 4, 00:15:14.272 "num_base_bdevs_discovered": 4, 00:15:14.272 "num_base_bdevs_operational": 4, 00:15:14.272 "process": { 00:15:14.272 "type": "rebuild", 00:15:14.272 "target": "spare", 00:15:14.272 "progress": { 00:15:14.272 "blocks": 12288, 00:15:14.272 "percent": 18 00:15:14.272 } 00:15:14.272 }, 00:15:14.272 "base_bdevs_list": [ 00:15:14.272 { 00:15:14.272 "name": "spare", 00:15:14.272 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:14.272 "is_configured": true, 00:15:14.272 "data_offset": 0, 00:15:14.272 "data_size": 65536 00:15:14.272 }, 00:15:14.272 { 00:15:14.272 "name": "BaseBdev2", 00:15:14.272 "uuid": "a21f490a-6644-5380-8de2-7abe08cd500b", 00:15:14.272 "is_configured": true, 00:15:14.272 "data_offset": 0, 00:15:14.272 "data_size": 65536 00:15:14.272 }, 00:15:14.272 { 00:15:14.272 "name": "BaseBdev3", 00:15:14.272 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:14.272 "is_configured": true, 00:15:14.272 "data_offset": 0, 00:15:14.272 "data_size": 65536 00:15:14.272 }, 00:15:14.272 { 00:15:14.272 "name": "BaseBdev4", 00:15:14.272 
"uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:14.272 "is_configured": true, 00:15:14.272 "data_offset": 0, 00:15:14.272 "data_size": 65536 00:15:14.272 } 00:15:14.272 ] 00:15:14.272 }' 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.272 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.533 19:13:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.533 [2024-11-27 19:13:23.916116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.533 [2024-11-27 19:13:23.995860] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:14.533 [2024-11-27 19:13:23.995903] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:14.533 
19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.533 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.533 "name": "raid_bdev1", 00:15:14.533 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:14.533 "strip_size_kb": 0, 00:15:14.533 "state": "online", 00:15:14.533 "raid_level": "raid1", 00:15:14.533 "superblock": false, 00:15:14.533 "num_base_bdevs": 4, 00:15:14.533 "num_base_bdevs_discovered": 3, 00:15:14.533 "num_base_bdevs_operational": 3, 00:15:14.533 "process": { 00:15:14.533 "type": "rebuild", 00:15:14.533 "target": "spare", 00:15:14.533 "progress": { 00:15:14.533 "blocks": 16384, 00:15:14.533 "percent": 25 00:15:14.533 } 00:15:14.533 }, 00:15:14.533 "base_bdevs_list": [ 00:15:14.533 { 00:15:14.533 "name": "spare", 00:15:14.533 "uuid": 
"5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:14.533 "is_configured": true, 00:15:14.533 "data_offset": 0, 00:15:14.533 "data_size": 65536 00:15:14.533 }, 00:15:14.533 { 00:15:14.533 "name": null, 00:15:14.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.533 "is_configured": false, 00:15:14.533 "data_offset": 0, 00:15:14.534 "data_size": 65536 00:15:14.534 }, 00:15:14.534 { 00:15:14.534 "name": "BaseBdev3", 00:15:14.534 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:14.534 "is_configured": true, 00:15:14.534 "data_offset": 0, 00:15:14.534 "data_size": 65536 00:15:14.534 }, 00:15:14.534 { 00:15:14.534 "name": "BaseBdev4", 00:15:14.534 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:14.534 "is_configured": true, 00:15:14.534 "data_offset": 0, 00:15:14.534 "data_size": 65536 00:15:14.534 } 00:15:14.534 ] 00:15:14.534 }' 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.534 19:13:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.534 19:13:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.794 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.794 "name": "raid_bdev1", 00:15:14.794 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:14.794 "strip_size_kb": 0, 00:15:14.794 "state": "online", 00:15:14.794 "raid_level": "raid1", 00:15:14.794 "superblock": false, 00:15:14.794 "num_base_bdevs": 4, 00:15:14.795 "num_base_bdevs_discovered": 3, 00:15:14.795 "num_base_bdevs_operational": 3, 00:15:14.795 "process": { 00:15:14.795 "type": "rebuild", 00:15:14.795 "target": "spare", 00:15:14.795 "progress": { 00:15:14.795 "blocks": 16384, 00:15:14.795 "percent": 25 00:15:14.795 } 00:15:14.795 }, 00:15:14.795 "base_bdevs_list": [ 00:15:14.795 { 00:15:14.795 "name": "spare", 00:15:14.795 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:14.795 "is_configured": true, 00:15:14.795 "data_offset": 0, 00:15:14.795 "data_size": 65536 00:15:14.795 }, 00:15:14.795 { 00:15:14.795 "name": null, 00:15:14.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.795 "is_configured": false, 00:15:14.795 "data_offset": 0, 00:15:14.795 "data_size": 65536 00:15:14.795 }, 00:15:14.795 { 00:15:14.795 "name": "BaseBdev3", 00:15:14.795 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:14.795 "is_configured": true, 00:15:14.795 "data_offset": 0, 00:15:14.795 "data_size": 65536 00:15:14.795 }, 
00:15:14.795 { 00:15:14.795 "name": "BaseBdev4", 00:15:14.795 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:14.795 "is_configured": true, 00:15:14.795 "data_offset": 0, 00:15:14.795 "data_size": 65536 00:15:14.795 } 00:15:14.795 ] 00:15:14.795 }' 00:15:14.795 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.795 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.795 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.795 [2024-11-27 19:13:24.272576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:14.795 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.795 19:13:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.054 142.50 IOPS, 427.50 MiB/s [2024-11-27T19:13:24.690Z] [2024-11-27 19:13:24.536114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:15.314 [2024-11-27 19:13:24.869084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.885 [2024-11-27 19:13:25.344811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.885 "name": "raid_bdev1", 00:15:15.885 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:15.885 "strip_size_kb": 0, 00:15:15.885 "state": "online", 00:15:15.885 "raid_level": "raid1", 00:15:15.885 "superblock": false, 00:15:15.885 "num_base_bdevs": 4, 00:15:15.885 "num_base_bdevs_discovered": 3, 00:15:15.885 "num_base_bdevs_operational": 3, 00:15:15.885 "process": { 00:15:15.885 "type": "rebuild", 00:15:15.885 "target": "spare", 00:15:15.885 "progress": { 00:15:15.885 "blocks": 30720, 00:15:15.885 "percent": 46 00:15:15.885 } 00:15:15.885 }, 00:15:15.885 "base_bdevs_list": [ 00:15:15.885 { 00:15:15.885 "name": "spare", 00:15:15.885 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:15.885 "is_configured": true, 00:15:15.885 "data_offset": 0, 00:15:15.885 "data_size": 65536 00:15:15.885 }, 00:15:15.885 { 00:15:15.885 "name": null, 00:15:15.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.885 "is_configured": false, 00:15:15.885 "data_offset": 0, 00:15:15.885 "data_size": 65536 00:15:15.885 }, 00:15:15.885 { 00:15:15.885 "name": "BaseBdev3", 00:15:15.885 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:15.885 
"is_configured": true, 00:15:15.885 "data_offset": 0, 00:15:15.885 "data_size": 65536 00:15:15.885 }, 00:15:15.885 { 00:15:15.885 "name": "BaseBdev4", 00:15:15.885 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:15.885 "is_configured": true, 00:15:15.885 "data_offset": 0, 00:15:15.885 "data_size": 65536 00:15:15.885 } 00:15:15.885 ] 00:15:15.885 }' 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.885 124.00 IOPS, 372.00 MiB/s [2024-11-27T19:13:25.521Z] 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.885 19:13:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.145 [2024-11-27 19:13:25.675658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:16.715 [2024-11-27 19:13:26.100740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:16.976 113.50 IOPS, 340.50 MiB/s [2024-11-27T19:13:26.612Z] 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.976 "name": "raid_bdev1", 00:15:16.976 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:16.976 "strip_size_kb": 0, 00:15:16.976 "state": "online", 00:15:16.976 "raid_level": "raid1", 00:15:16.976 "superblock": false, 00:15:16.976 "num_base_bdevs": 4, 00:15:16.976 "num_base_bdevs_discovered": 3, 00:15:16.976 "num_base_bdevs_operational": 3, 00:15:16.976 "process": { 00:15:16.976 "type": "rebuild", 00:15:16.976 "target": "spare", 00:15:16.976 "progress": { 00:15:16.976 "blocks": 51200, 00:15:16.976 "percent": 78 00:15:16.976 } 00:15:16.976 }, 00:15:16.976 "base_bdevs_list": [ 00:15:16.976 { 00:15:16.976 "name": "spare", 00:15:16.976 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:16.976 "is_configured": true, 00:15:16.976 "data_offset": 0, 00:15:16.976 "data_size": 65536 00:15:16.976 }, 00:15:16.976 { 00:15:16.976 "name": null, 00:15:16.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.976 "is_configured": false, 00:15:16.976 "data_offset": 0, 00:15:16.976 "data_size": 65536 00:15:16.976 }, 00:15:16.976 { 00:15:16.976 "name": "BaseBdev3", 00:15:16.976 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:16.976 "is_configured": true, 00:15:16.976 "data_offset": 0, 00:15:16.976 "data_size": 65536 00:15:16.976 }, 00:15:16.976 { 00:15:16.976 "name": "BaseBdev4", 00:15:16.976 "uuid": 
"90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 00:15:16.976 "is_configured": true, 00:15:16.976 "data_offset": 0, 00:15:16.976 "data_size": 65536 00:15:16.976 } 00:15:16.976 ] 00:15:16.976 }' 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.976 [2024-11-27 19:13:26.545557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.976 19:13:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.236 [2024-11-27 19:13:26.764847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:17.805 [2024-11-27 19:13:27.304391] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:17.805 102.00 IOPS, 306.00 MiB/s [2024-11-27T19:13:27.441Z] [2024-11-27 19:13:27.409900] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:17.805 [2024-11-27 19:13:27.414305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.065 19:13:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.065 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.065 "name": "raid_bdev1", 00:15:18.065 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 00:15:18.065 "strip_size_kb": 0, 00:15:18.065 "state": "online", 00:15:18.065 "raid_level": "raid1", 00:15:18.065 "superblock": false, 00:15:18.065 "num_base_bdevs": 4, 00:15:18.065 "num_base_bdevs_discovered": 3, 00:15:18.065 "num_base_bdevs_operational": 3, 00:15:18.065 "base_bdevs_list": [ 00:15:18.065 { 00:15:18.065 "name": "spare", 00:15:18.065 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e", 00:15:18.065 "is_configured": true, 00:15:18.065 "data_offset": 0, 00:15:18.065 "data_size": 65536 00:15:18.065 }, 00:15:18.065 { 00:15:18.065 "name": null, 00:15:18.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.065 "is_configured": false, 00:15:18.065 "data_offset": 0, 00:15:18.065 "data_size": 65536 00:15:18.065 }, 00:15:18.065 { 00:15:18.065 "name": "BaseBdev3", 00:15:18.065 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43", 00:15:18.065 "is_configured": true, 00:15:18.065 "data_offset": 0, 00:15:18.065 "data_size": 65536 00:15:18.065 }, 00:15:18.065 { 00:15:18.065 "name": "BaseBdev4", 00:15:18.065 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c", 
00:15:18.065 "is_configured": true, 00:15:18.066 "data_offset": 0, 00:15:18.066 "data_size": 65536 00:15:18.066 } 00:15:18.066 ] 00:15:18.066 }' 00:15:18.066 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.066 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:18.066 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.326 "name": "raid_bdev1", 00:15:18.326 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034", 
00:15:18.326 "strip_size_kb": 0,
00:15:18.326 "state": "online",
00:15:18.326 "raid_level": "raid1",
00:15:18.326 "superblock": false,
00:15:18.326 "num_base_bdevs": 4,
00:15:18.326 "num_base_bdevs_discovered": 3,
00:15:18.326 "num_base_bdevs_operational": 3,
00:15:18.326 "base_bdevs_list": [
00:15:18.326 {
00:15:18.326 "name": "spare",
00:15:18.326 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e",
00:15:18.326 "is_configured": true,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 },
00:15:18.326 {
00:15:18.326 "name": null,
00:15:18.326 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.326 "is_configured": false,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 },
00:15:18.326 {
00:15:18.326 "name": "BaseBdev3",
00:15:18.326 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43",
00:15:18.326 "is_configured": true,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 },
00:15:18.326 {
00:15:18.326 "name": "BaseBdev4",
00:15:18.326 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c",
00:15:18.326 "is_configured": true,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 }
00:15:18.326 ]
00:15:18.326 }'
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:18.326 "name": "raid_bdev1",
00:15:18.326 "uuid": "dbe4247c-15a3-4306-8783-d5290c4d9034",
00:15:18.326 "strip_size_kb": 0,
00:15:18.326 "state": "online",
00:15:18.326 "raid_level": "raid1",
00:15:18.326 "superblock": false,
00:15:18.326 "num_base_bdevs": 4,
00:15:18.326 "num_base_bdevs_discovered": 3,
00:15:18.326 "num_base_bdevs_operational": 3,
00:15:18.326 "base_bdevs_list": [
00:15:18.326 {
00:15:18.326 "name": "spare",
00:15:18.326 "uuid": "5af05f5f-51c8-5a04-8633-91a7a4e6049e",
00:15:18.326 "is_configured": true,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 },
00:15:18.326 {
00:15:18.326 "name": null,
00:15:18.326 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:18.326 "is_configured": false,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 },
00:15:18.326 {
00:15:18.326 "name": "BaseBdev3",
00:15:18.326 "uuid": "b2ead4f7-f3ff-5854-920f-0169fab53a43",
00:15:18.326 "is_configured": true,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 },
00:15:18.326 {
00:15:18.326 "name": "BaseBdev4",
00:15:18.326 "uuid": "90f16727-0d6b-55b9-80b4-b25d3d0f5d6c",
00:15:18.326 "is_configured": true,
00:15:18.326 "data_offset": 0,
00:15:18.326 "data_size": 65536
00:15:18.326 }
00:15:18.326 ]
00:15:18.326 }'
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:18.326 19:13:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:18.896 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:18.896 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.896 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:18.896 [2024-11-27 19:13:28.285202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:18.896 [2024-11-27 19:13:28.285245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:18.896 95.25 IOPS, 285.75 MiB/s
00:15:18.896 Latency(us)
00:15:18.896 [2024-11-27T19:13:28.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:18.896 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:15:18.896 raid_bdev1 : 8.04 94.97 284.91 0.00 0.00 15107.29 332.69 117220.72
00:15:18.896 [2024-11-27T19:13:28.532Z] ===================================================================================================================
00:15:18.896 [2024-11-27T19:13:28.532Z] Total : 94.97 284.91 0.00 0.00 15107.29 332.69 117220.72
00:15:18.896 [2024-11-27 19:13:28.413872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:18.896 [2024-11-27 19:13:28.413949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:18.896 [2024-11-27 19:13:28.414040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:18.896 [2024-11-27 19:13:28.414053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:15:18.897 {
00:15:18.897 "results": [
00:15:18.897 {
00:15:18.897 "job": "raid_bdev1",
00:15:18.897 "core_mask": "0x1",
00:15:18.897 "workload": "randrw",
00:15:18.897 "percentage": 50,
00:15:18.897 "status": "finished",
00:15:18.897 "queue_depth": 2,
00:15:18.897 "io_size": 3145728,
00:15:18.897 "runtime": 8.044728,
00:15:18.897 "iops": 94.96902816353766,
00:15:18.897 "mibps": 284.907084490613,
00:15:18.897 "io_failed": 0,
00:15:18.897 "io_timeout": 0,
00:15:18.897 "avg_latency_us": 15107.294012208784,
00:15:18.897 "min_latency_us": 332.6882096069869,
00:15:18.897 "max_latency_us": 117220.7231441048
00:15:18.897 }
00:15:18.897 ],
00:15:18.897 "core_count": 1
00:15:18.897 }
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:18.897 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:15:19.157 /dev/nbd0
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:19.157 1+0 records in
00:15:19.157 1+0 records out
00:15:19.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353025 s, 11.6 MB/s
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:19.157 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:15:19.417 /dev/nbd1
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:19.417 1+0 records in
00:15:19.417 1+0 records out
00:15:19.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416907 s, 9.8 MB/s
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:19.417 19:13:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:19.677 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:19.936 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:15:20.196 /dev/nbd1
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:20.196 1+0 records in
00:15:20.196 1+0 records out
00:15:20.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424477 s, 9.6 MB/s
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:20.196 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:15:20.456 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:20.457 19:13:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78829
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78829 ']'
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78829
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78829
00:15:20.717 killing process with pid 78829 Received shutdown signal, test time was about 9.834985 seconds
00:15:20.717
00:15:20.717 Latency(us)
00:15:20.717 [2024-11-27T19:13:30.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:20.717 [2024-11-27T19:13:30.353Z] ===================================================================================================================
00:15:20.717 [2024-11-27T19:13:30.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78829'
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78829
00:15:20.717 [2024-11-27 19:13:30.179961] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:20.717 19:13:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78829
00:15:20.978 [2024-11-27 19:13:30.579818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0
00:15:22.369
00:15:22.369 real 0m13.229s
00:15:22.369 user 0m16.635s
00:15:22.369 sys 0m1.857s
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:15:22.369 ************************************
00:15:22.369 END TEST raid_rebuild_test_io
00:15:22.369 ************************************
00:15:22.369 19:13:31 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true
00:15:22.369 19:13:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:15:22.369 19:13:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:22.369 19:13:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:22.369 ************************************
00:15:22.369 START TEST raid_rebuild_test_sb_io
00:15:22.369 ************************************
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79238
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79238
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79238 ']'
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:22.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:22.369 19:13:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:22.369 I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:22.369 Zero copy mechanism will not be used.
00:15:22.369 [2024-11-27 19:13:31.893687] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... [2024-11-27 19:13:31.893816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79238 ]
00:15:22.645 [2024-11-27 19:13:32.063502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:22.645 [2024-11-27 19:13:32.168888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.905 [2024-11-27 19:13:32.358329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:22.905 [2024-11-27 19:13:32.358366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:23.166 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:23.166 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0
00:15:23.166 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.167 BaseBdev1_malloc
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.167 [2024-11-27 19:13:32.760090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:23.167 [2024-11-27 19:13:32.760153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.167 [2024-11-27 19:13:32.760175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:23.167 [2024-11-27 19:13:32.760188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.167 [2024-11-27 19:13:32.762271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.167 [2024-11-27 19:13:32.762309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:23.167 BaseBdev1
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.167 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.427 BaseBdev2_malloc
00:15:23.427 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.427 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:23.427 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.427 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 [2024-11-27 19:13:32.813715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:23.428 [2024-11-27 19:13:32.813773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.428 [2024-11-27 19:13:32.813796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:23.428 [2024-11-27 19:13:32.813809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.428 [2024-11-27 19:13:32.815806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.428 [2024-11-27 19:13:32.815845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:23.428 BaseBdev2
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 BaseBdev3_malloc
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 [2024-11-27 19:13:32.896047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:23.428 [2024-11-27 19:13:32.896102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.428 [2024-11-27 19:13:32.896123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:23.428 [2024-11-27 19:13:32.896134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.428 [2024-11-27 19:13:32.898117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.428 [2024-11-27 19:13:32.898155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:23.428 BaseBdev3
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 BaseBdev4_malloc
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 [2024-11-27 19:13:32.951784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:15:23.428 [2024-11-27 19:13:32.951839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:23.428 [2024-11-27 19:13:32.951858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
[2024-11-27 19:13:32.951868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:23.428 [2024-11-27 19:13:32.953820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:23.428 [2024-11-27 19:13:32.953857] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:15:23.428 BaseBdev4
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 spare_malloc
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 spare_delay
00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:15:23.428 [2024-11-27 19:13:33.018975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:23.428 [2024-11-27 19:13:33.019024] vbdev_passthru.c:
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.428 [2024-11-27 19:13:33.019039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:23.428 [2024-11-27 19:13:33.019050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.428 [2024-11-27 19:13:33.021052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.428 [2024-11-27 19:13:33.021090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.428 spare 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.428 [2024-11-27 19:13:33.030999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.428 [2024-11-27 19:13:33.032776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.428 [2024-11-27 19:13:33.032852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.428 [2024-11-27 19:13:33.032901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.428 [2024-11-27 19:13:33.033071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:23.428 [2024-11-27 19:13:33.033101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.428 [2024-11-27 19:13:33.033350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:23.428 [2024-11-27 19:13:33.033523] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:23.428 [2024-11-27 19:13:33.033541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:23.428 [2024-11-27 19:13:33.033707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.428 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.429 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.429 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.429 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.429 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.689 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.689 "name": "raid_bdev1", 00:15:23.689 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:23.689 "strip_size_kb": 0, 00:15:23.689 "state": "online", 00:15:23.689 "raid_level": "raid1", 00:15:23.689 "superblock": true, 00:15:23.689 "num_base_bdevs": 4, 00:15:23.689 "num_base_bdevs_discovered": 4, 00:15:23.689 "num_base_bdevs_operational": 4, 00:15:23.689 "base_bdevs_list": [ 00:15:23.689 { 00:15:23.689 "name": "BaseBdev1", 00:15:23.689 "uuid": "d84dd410-f96f-5271-85a6-49b43e2ebae2", 00:15:23.689 "is_configured": true, 00:15:23.689 "data_offset": 2048, 00:15:23.689 "data_size": 63488 00:15:23.690 }, 00:15:23.690 { 00:15:23.690 "name": "BaseBdev2", 00:15:23.690 "uuid": "1d7b5fa2-569c-530f-a3f1-ae080d656121", 00:15:23.690 "is_configured": true, 00:15:23.690 "data_offset": 2048, 00:15:23.690 "data_size": 63488 00:15:23.690 }, 00:15:23.690 { 00:15:23.690 "name": "BaseBdev3", 00:15:23.690 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:23.690 "is_configured": true, 00:15:23.690 "data_offset": 2048, 00:15:23.690 "data_size": 63488 00:15:23.690 }, 00:15:23.690 { 00:15:23.690 "name": "BaseBdev4", 00:15:23.690 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:23.690 "is_configured": true, 00:15:23.690 "data_offset": 2048, 00:15:23.690 "data_size": 63488 00:15:23.690 } 00:15:23.690 ] 00:15:23.690 }' 00:15:23.690 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.690 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:23.950 [2024-11-27 19:13:33.530412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.950 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.210 [2024-11-27 19:13:33.605956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.210 19:13:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.210 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.211 "name": "raid_bdev1", 00:15:24.211 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:24.211 
"strip_size_kb": 0, 00:15:24.211 "state": "online", 00:15:24.211 "raid_level": "raid1", 00:15:24.211 "superblock": true, 00:15:24.211 "num_base_bdevs": 4, 00:15:24.211 "num_base_bdevs_discovered": 3, 00:15:24.211 "num_base_bdevs_operational": 3, 00:15:24.211 "base_bdevs_list": [ 00:15:24.211 { 00:15:24.211 "name": null, 00:15:24.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.211 "is_configured": false, 00:15:24.211 "data_offset": 0, 00:15:24.211 "data_size": 63488 00:15:24.211 }, 00:15:24.211 { 00:15:24.211 "name": "BaseBdev2", 00:15:24.211 "uuid": "1d7b5fa2-569c-530f-a3f1-ae080d656121", 00:15:24.211 "is_configured": true, 00:15:24.211 "data_offset": 2048, 00:15:24.211 "data_size": 63488 00:15:24.211 }, 00:15:24.211 { 00:15:24.211 "name": "BaseBdev3", 00:15:24.211 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:24.211 "is_configured": true, 00:15:24.211 "data_offset": 2048, 00:15:24.211 "data_size": 63488 00:15:24.211 }, 00:15:24.211 { 00:15:24.211 "name": "BaseBdev4", 00:15:24.211 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:24.211 "is_configured": true, 00:15:24.211 "data_offset": 2048, 00:15:24.211 "data_size": 63488 00:15:24.211 } 00:15:24.211 ] 00:15:24.211 }' 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.211 19:13:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.211 [2024-11-27 19:13:33.713479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:24.211 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:24.211 Zero copy mechanism will not be used. 00:15:24.211 Running I/O for 60 seconds... 
00:15:24.471 19:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.471 19:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.471 19:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.471 [2024-11-27 19:13:34.075039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.730 19:13:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.730 19:13:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:24.730 [2024-11-27 19:13:34.164343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:24.731 [2024-11-27 19:13:34.166320] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.731 [2024-11-27 19:13:34.287040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:24.731 [2024-11-27 19:13:34.288355] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:24.990 [2024-11-27 19:13:34.505877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:24.990 [2024-11-27 19:13:34.506696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:25.250 166.00 IOPS, 498.00 MiB/s [2024-11-27T19:13:34.886Z] [2024-11-27 19:13:34.858622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:25.511 [2024-11-27 19:13:35.004275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.511 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.771 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.771 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.771 "name": "raid_bdev1", 00:15:25.771 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:25.771 "strip_size_kb": 0, 00:15:25.771 "state": "online", 00:15:25.771 "raid_level": "raid1", 00:15:25.771 "superblock": true, 00:15:25.771 "num_base_bdevs": 4, 00:15:25.771 "num_base_bdevs_discovered": 4, 00:15:25.771 "num_base_bdevs_operational": 4, 00:15:25.771 "process": { 00:15:25.771 "type": "rebuild", 00:15:25.771 "target": "spare", 00:15:25.771 "progress": { 00:15:25.771 "blocks": 10240, 00:15:25.771 "percent": 16 00:15:25.771 } 00:15:25.771 }, 00:15:25.771 "base_bdevs_list": [ 00:15:25.771 { 00:15:25.771 "name": "spare", 00:15:25.771 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:25.771 "is_configured": true, 00:15:25.771 "data_offset": 2048, 00:15:25.771 "data_size": 63488 
00:15:25.771 }, 00:15:25.771 { 00:15:25.771 "name": "BaseBdev2", 00:15:25.771 "uuid": "1d7b5fa2-569c-530f-a3f1-ae080d656121", 00:15:25.771 "is_configured": true, 00:15:25.771 "data_offset": 2048, 00:15:25.771 "data_size": 63488 00:15:25.771 }, 00:15:25.771 { 00:15:25.771 "name": "BaseBdev3", 00:15:25.771 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:25.771 "is_configured": true, 00:15:25.771 "data_offset": 2048, 00:15:25.771 "data_size": 63488 00:15:25.771 }, 00:15:25.771 { 00:15:25.771 "name": "BaseBdev4", 00:15:25.772 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:25.772 "is_configured": true, 00:15:25.772 "data_offset": 2048, 00:15:25.772 "data_size": 63488 00:15:25.772 } 00:15:25.772 ] 00:15:25.772 }' 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.772 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.772 [2024-11-27 19:13:35.264998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:25.772 [2024-11-27 19:13:35.273504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.772 [2024-11-27 19:13:35.369100] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.772 [2024-11-27 
19:13:35.388659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.772 [2024-11-27 19:13:35.388735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.772 [2024-11-27 19:13:35.388749] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:26.032 [2024-11-27 19:13:35.427453] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.032 19:13:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.032 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.032 "name": "raid_bdev1", 00:15:26.032 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:26.032 "strip_size_kb": 0, 00:15:26.032 "state": "online", 00:15:26.032 "raid_level": "raid1", 00:15:26.032 "superblock": true, 00:15:26.032 "num_base_bdevs": 4, 00:15:26.032 "num_base_bdevs_discovered": 3, 00:15:26.032 "num_base_bdevs_operational": 3, 00:15:26.032 "base_bdevs_list": [ 00:15:26.032 { 00:15:26.032 "name": null, 00:15:26.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.033 "is_configured": false, 00:15:26.033 "data_offset": 0, 00:15:26.033 "data_size": 63488 00:15:26.033 }, 00:15:26.033 { 00:15:26.033 "name": "BaseBdev2", 00:15:26.033 "uuid": "1d7b5fa2-569c-530f-a3f1-ae080d656121", 00:15:26.033 "is_configured": true, 00:15:26.033 "data_offset": 2048, 00:15:26.033 "data_size": 63488 00:15:26.033 }, 00:15:26.033 { 00:15:26.033 "name": "BaseBdev3", 00:15:26.033 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:26.033 "is_configured": true, 00:15:26.033 "data_offset": 2048, 00:15:26.033 "data_size": 63488 00:15:26.033 }, 00:15:26.033 { 00:15:26.033 "name": "BaseBdev4", 00:15:26.033 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:26.033 "is_configured": true, 00:15:26.033 "data_offset": 2048, 00:15:26.033 "data_size": 63488 00:15:26.033 } 00:15:26.033 ] 00:15:26.033 }' 00:15:26.033 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.033 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.293 150.50 IOPS, 451.50 MiB/s [2024-11-27T19:13:35.929Z] 19:13:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.293 "name": "raid_bdev1", 00:15:26.293 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:26.293 "strip_size_kb": 0, 00:15:26.293 "state": "online", 00:15:26.293 "raid_level": "raid1", 00:15:26.293 "superblock": true, 00:15:26.293 "num_base_bdevs": 4, 00:15:26.293 "num_base_bdevs_discovered": 3, 00:15:26.293 "num_base_bdevs_operational": 3, 00:15:26.293 "base_bdevs_list": [ 00:15:26.293 { 00:15:26.293 "name": null, 00:15:26.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.293 "is_configured": false, 00:15:26.293 "data_offset": 0, 00:15:26.293 "data_size": 63488 00:15:26.293 }, 00:15:26.293 { 00:15:26.293 "name": "BaseBdev2", 00:15:26.293 "uuid": "1d7b5fa2-569c-530f-a3f1-ae080d656121", 00:15:26.293 "is_configured": true, 00:15:26.293 "data_offset": 
2048, 00:15:26.293 "data_size": 63488 00:15:26.293 }, 00:15:26.293 { 00:15:26.293 "name": "BaseBdev3", 00:15:26.293 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:26.293 "is_configured": true, 00:15:26.293 "data_offset": 2048, 00:15:26.293 "data_size": 63488 00:15:26.293 }, 00:15:26.293 { 00:15:26.293 "name": "BaseBdev4", 00:15:26.293 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:26.293 "is_configured": true, 00:15:26.293 "data_offset": 2048, 00:15:26.293 "data_size": 63488 00:15:26.293 } 00:15:26.293 ] 00:15:26.293 }' 00:15:26.293 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.553 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.553 19:13:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.553 19:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.553 19:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.553 19:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.553 19:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.553 [2024-11-27 19:13:36.023478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.553 19:13:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.553 19:13:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:26.553 [2024-11-27 19:13:36.092518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:26.553 [2024-11-27 19:13:36.094503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.812 [2024-11-27 19:13:36.216784] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:26.812 [2024-11-27 19:13:36.218232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:26.812 [2024-11-27 19:13:36.434491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:26.812 [2024-11-27 19:13:36.435202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:27.383 155.67 IOPS, 467.00 MiB/s [2024-11-27T19:13:37.019Z] [2024-11-27 19:13:36.919999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.643 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.643 "name": "raid_bdev1", 00:15:27.644 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:27.644 "strip_size_kb": 0, 00:15:27.644 "state": "online", 00:15:27.644 "raid_level": "raid1", 00:15:27.644 "superblock": true, 00:15:27.644 "num_base_bdevs": 4, 00:15:27.644 "num_base_bdevs_discovered": 4, 00:15:27.644 "num_base_bdevs_operational": 4, 00:15:27.644 "process": { 00:15:27.644 "type": "rebuild", 00:15:27.644 "target": "spare", 00:15:27.644 "progress": { 00:15:27.644 "blocks": 12288, 00:15:27.644 "percent": 19 00:15:27.644 } 00:15:27.644 }, 00:15:27.644 "base_bdevs_list": [ 00:15:27.644 { 00:15:27.644 "name": "spare", 00:15:27.644 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:27.644 "is_configured": true, 00:15:27.644 "data_offset": 2048, 00:15:27.644 "data_size": 63488 00:15:27.644 }, 00:15:27.644 { 00:15:27.644 "name": "BaseBdev2", 00:15:27.644 "uuid": "1d7b5fa2-569c-530f-a3f1-ae080d656121", 00:15:27.644 "is_configured": true, 00:15:27.644 "data_offset": 2048, 00:15:27.644 "data_size": 63488 00:15:27.644 }, 00:15:27.644 { 00:15:27.644 "name": "BaseBdev3", 00:15:27.644 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:27.644 "is_configured": true, 00:15:27.644 "data_offset": 2048, 00:15:27.644 "data_size": 63488 00:15:27.644 }, 00:15:27.644 { 00:15:27.644 "name": "BaseBdev4", 00:15:27.644 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:27.644 "is_configured": true, 00:15:27.644 "data_offset": 2048, 00:15:27.644 "data_size": 63488 00:15:27.644 } 00:15:27.644 ] 00:15:27.644 }' 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.644 [2024-11-27 19:13:37.159758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.644 19:13:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:27.644 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.644 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.644 [2024-11-27 19:13:37.234561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.904 [2024-11-27 19:13:37.382877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:28.164 [2024-11-27 19:13:37.585380] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:28.164 [2024-11-27 19:13:37.585419] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:28.164 19:13:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.164 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.164 "name": "raid_bdev1", 00:15:28.164 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:28.164 "strip_size_kb": 0, 00:15:28.164 "state": "online", 00:15:28.164 "raid_level": "raid1", 00:15:28.164 "superblock": true, 00:15:28.164 "num_base_bdevs": 4, 00:15:28.164 "num_base_bdevs_discovered": 3, 00:15:28.164 "num_base_bdevs_operational": 3, 00:15:28.164 "process": { 00:15:28.164 "type": "rebuild", 00:15:28.164 "target": "spare", 00:15:28.164 "progress": { 00:15:28.164 "blocks": 16384, 00:15:28.164 "percent": 25 00:15:28.164 } 00:15:28.164 }, 00:15:28.164 "base_bdevs_list": [ 00:15:28.164 { 00:15:28.164 "name": "spare", 00:15:28.165 
"uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:28.165 "is_configured": true, 00:15:28.165 "data_offset": 2048, 00:15:28.165 "data_size": 63488 00:15:28.165 }, 00:15:28.165 { 00:15:28.165 "name": null, 00:15:28.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.165 "is_configured": false, 00:15:28.165 "data_offset": 0, 00:15:28.165 "data_size": 63488 00:15:28.165 }, 00:15:28.165 { 00:15:28.165 "name": "BaseBdev3", 00:15:28.165 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:28.165 "is_configured": true, 00:15:28.165 "data_offset": 2048, 00:15:28.165 "data_size": 63488 00:15:28.165 }, 00:15:28.165 { 00:15:28.165 "name": "BaseBdev4", 00:15:28.165 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:28.165 "is_configured": true, 00:15:28.165 "data_offset": 2048, 00:15:28.165 "data_size": 63488 00:15:28.165 } 00:15:28.165 ] 00:15:28.165 }' 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.165 134.00 IOPS, 402.00 MiB/s [2024-11-27T19:13:37.801Z] 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=499 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.165 19:13:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.165 "name": "raid_bdev1", 00:15:28.165 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:28.165 "strip_size_kb": 0, 00:15:28.165 "state": "online", 00:15:28.165 "raid_level": "raid1", 00:15:28.165 "superblock": true, 00:15:28.165 "num_base_bdevs": 4, 00:15:28.165 "num_base_bdevs_discovered": 3, 00:15:28.165 "num_base_bdevs_operational": 3, 00:15:28.165 "process": { 00:15:28.165 "type": "rebuild", 00:15:28.165 "target": "spare", 00:15:28.165 "progress": { 00:15:28.165 "blocks": 18432, 00:15:28.165 "percent": 29 00:15:28.165 } 00:15:28.165 }, 00:15:28.165 "base_bdevs_list": [ 00:15:28.165 { 00:15:28.165 "name": "spare", 00:15:28.165 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:28.165 "is_configured": true, 00:15:28.165 "data_offset": 2048, 00:15:28.165 "data_size": 63488 00:15:28.165 }, 00:15:28.165 { 00:15:28.165 "name": null, 00:15:28.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.165 "is_configured": false, 00:15:28.165 "data_offset": 0, 00:15:28.165 "data_size": 63488 00:15:28.165 }, 00:15:28.165 { 00:15:28.165 "name": "BaseBdev3", 00:15:28.165 "uuid": 
"88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:28.165 "is_configured": true, 00:15:28.165 "data_offset": 2048, 00:15:28.165 "data_size": 63488 00:15:28.165 }, 00:15:28.165 { 00:15:28.165 "name": "BaseBdev4", 00:15:28.165 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:28.165 "is_configured": true, 00:15:28.165 "data_offset": 2048, 00:15:28.165 "data_size": 63488 00:15:28.165 } 00:15:28.165 ] 00:15:28.165 }' 00:15:28.165 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.425 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.425 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.425 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.425 19:13:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.425 [2024-11-27 19:13:37.911101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:28.685 [2024-11-27 19:13:38.144223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:28.685 [2024-11-27 19:13:38.253302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:28.945 [2024-11-27 19:13:38.489277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:29.464 118.00 IOPS, 354.00 MiB/s [2024-11-27T19:13:39.100Z] 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.464 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.464 "name": "raid_bdev1", 00:15:29.464 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:29.464 "strip_size_kb": 0, 00:15:29.464 "state": "online", 00:15:29.464 "raid_level": "raid1", 00:15:29.464 "superblock": true, 00:15:29.464 "num_base_bdevs": 4, 00:15:29.464 "num_base_bdevs_discovered": 3, 00:15:29.464 "num_base_bdevs_operational": 3, 00:15:29.464 "process": { 00:15:29.464 "type": "rebuild", 00:15:29.464 "target": "spare", 00:15:29.464 "progress": { 00:15:29.464 "blocks": 36864, 00:15:29.464 "percent": 58 00:15:29.464 } 00:15:29.464 }, 00:15:29.464 "base_bdevs_list": [ 00:15:29.464 { 00:15:29.464 "name": "spare", 00:15:29.464 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:29.464 "is_configured": true, 00:15:29.464 "data_offset": 2048, 00:15:29.464 "data_size": 63488 00:15:29.464 }, 00:15:29.464 { 00:15:29.464 "name": null, 00:15:29.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.464 
"is_configured": false, 00:15:29.464 "data_offset": 0, 00:15:29.464 "data_size": 63488 00:15:29.464 }, 00:15:29.464 { 00:15:29.464 "name": "BaseBdev3", 00:15:29.465 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:29.465 "is_configured": true, 00:15:29.465 "data_offset": 2048, 00:15:29.465 "data_size": 63488 00:15:29.465 }, 00:15:29.465 { 00:15:29.465 "name": "BaseBdev4", 00:15:29.465 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:29.465 "is_configured": true, 00:15:29.465 "data_offset": 2048, 00:15:29.465 "data_size": 63488 00:15:29.465 } 00:15:29.465 ] 00:15:29.465 }' 00:15:29.465 [2024-11-27 19:13:38.906717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:29.465 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.465 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.465 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.465 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.465 19:13:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.727 [2024-11-27 19:13:39.326826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:29.727 [2024-11-27 19:13:39.327854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:29.989 [2024-11-27 19:13:39.530227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:30.508 105.83 IOPS, 317.50 MiB/s [2024-11-27T19:13:40.144Z] 19:13:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.508 19:13:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.508 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.508 "name": "raid_bdev1", 00:15:30.508 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:30.508 "strip_size_kb": 0, 00:15:30.508 "state": "online", 00:15:30.508 "raid_level": "raid1", 00:15:30.508 "superblock": true, 00:15:30.508 "num_base_bdevs": 4, 00:15:30.508 "num_base_bdevs_discovered": 3, 00:15:30.508 "num_base_bdevs_operational": 3, 00:15:30.508 "process": { 00:15:30.508 "type": "rebuild", 00:15:30.509 "target": "spare", 00:15:30.509 "progress": { 00:15:30.509 "blocks": 55296, 00:15:30.509 "percent": 87 00:15:30.509 } 00:15:30.509 }, 00:15:30.509 "base_bdevs_list": [ 00:15:30.509 { 00:15:30.509 "name": "spare", 00:15:30.509 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:30.509 "is_configured": true, 00:15:30.509 "data_offset": 2048, 
00:15:30.509 "data_size": 63488 00:15:30.509 }, 00:15:30.509 { 00:15:30.509 "name": null, 00:15:30.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.509 "is_configured": false, 00:15:30.509 "data_offset": 0, 00:15:30.509 "data_size": 63488 00:15:30.509 }, 00:15:30.509 { 00:15:30.509 "name": "BaseBdev3", 00:15:30.509 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:30.509 "is_configured": true, 00:15:30.509 "data_offset": 2048, 00:15:30.509 "data_size": 63488 00:15:30.509 }, 00:15:30.509 { 00:15:30.509 "name": "BaseBdev4", 00:15:30.509 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:30.509 "is_configured": true, 00:15:30.509 "data_offset": 2048, 00:15:30.509 "data_size": 63488 00:15:30.509 } 00:15:30.509 ] 00:15:30.509 }' 00:15:30.509 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.509 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.509 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.769 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.769 19:13:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.769 [2024-11-27 19:13:40.393632] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:31.028 [2024-11-27 19:13:40.499089] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:31.028 [2024-11-27 19:13:40.501967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.547 94.86 IOPS, 284.57 MiB/s [2024-11-27T19:13:41.183Z] 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.547 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.807 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.807 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.807 "name": "raid_bdev1", 00:15:31.807 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:31.807 "strip_size_kb": 0, 00:15:31.807 "state": "online", 00:15:31.807 "raid_level": "raid1", 00:15:31.807 "superblock": true, 00:15:31.807 "num_base_bdevs": 4, 00:15:31.807 "num_base_bdevs_discovered": 3, 00:15:31.807 "num_base_bdevs_operational": 3, 00:15:31.807 "base_bdevs_list": [ 00:15:31.807 { 00:15:31.807 "name": "spare", 00:15:31.807 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:31.807 "is_configured": true, 00:15:31.807 "data_offset": 2048, 00:15:31.807 "data_size": 63488 00:15:31.807 }, 00:15:31.807 { 00:15:31.807 "name": null, 00:15:31.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.807 "is_configured": false, 00:15:31.807 "data_offset": 0, 00:15:31.807 "data_size": 63488 00:15:31.807 }, 00:15:31.807 { 00:15:31.807 "name": "BaseBdev3", 
00:15:31.807 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:31.807 "is_configured": true, 00:15:31.807 "data_offset": 2048, 00:15:31.807 "data_size": 63488 00:15:31.807 }, 00:15:31.807 { 00:15:31.807 "name": "BaseBdev4", 00:15:31.807 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:31.807 "is_configured": true, 00:15:31.807 "data_offset": 2048, 00:15:31.807 "data_size": 63488 00:15:31.807 } 00:15:31.807 ] 00:15:31.807 }' 00:15:31.807 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.807 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:31.807 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.808 "name": "raid_bdev1", 00:15:31.808 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:31.808 "strip_size_kb": 0, 00:15:31.808 "state": "online", 00:15:31.808 "raid_level": "raid1", 00:15:31.808 "superblock": true, 00:15:31.808 "num_base_bdevs": 4, 00:15:31.808 "num_base_bdevs_discovered": 3, 00:15:31.808 "num_base_bdevs_operational": 3, 00:15:31.808 "base_bdevs_list": [ 00:15:31.808 { 00:15:31.808 "name": "spare", 00:15:31.808 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:31.808 "is_configured": true, 00:15:31.808 "data_offset": 2048, 00:15:31.808 "data_size": 63488 00:15:31.808 }, 00:15:31.808 { 00:15:31.808 "name": null, 00:15:31.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.808 "is_configured": false, 00:15:31.808 "data_offset": 0, 00:15:31.808 "data_size": 63488 00:15:31.808 }, 00:15:31.808 { 00:15:31.808 "name": "BaseBdev3", 00:15:31.808 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:31.808 "is_configured": true, 00:15:31.808 "data_offset": 2048, 00:15:31.808 "data_size": 63488 00:15:31.808 }, 00:15:31.808 { 00:15:31.808 "name": "BaseBdev4", 00:15:31.808 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:31.808 "is_configured": true, 00:15:31.808 "data_offset": 2048, 00:15:31.808 "data_size": 63488 00:15:31.808 } 00:15:31.808 ] 00:15:31.808 }' 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.808 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.068 "name": "raid_bdev1", 00:15:32.068 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:32.068 "strip_size_kb": 0, 00:15:32.068 "state": "online", 
00:15:32.068 "raid_level": "raid1", 00:15:32.068 "superblock": true, 00:15:32.068 "num_base_bdevs": 4, 00:15:32.068 "num_base_bdevs_discovered": 3, 00:15:32.068 "num_base_bdevs_operational": 3, 00:15:32.068 "base_bdevs_list": [ 00:15:32.068 { 00:15:32.068 "name": "spare", 00:15:32.068 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:32.068 "is_configured": true, 00:15:32.068 "data_offset": 2048, 00:15:32.068 "data_size": 63488 00:15:32.068 }, 00:15:32.068 { 00:15:32.068 "name": null, 00:15:32.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.068 "is_configured": false, 00:15:32.068 "data_offset": 0, 00:15:32.068 "data_size": 63488 00:15:32.068 }, 00:15:32.068 { 00:15:32.068 "name": "BaseBdev3", 00:15:32.068 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:32.068 "is_configured": true, 00:15:32.068 "data_offset": 2048, 00:15:32.068 "data_size": 63488 00:15:32.068 }, 00:15:32.068 { 00:15:32.068 "name": "BaseBdev4", 00:15:32.068 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:32.068 "is_configured": true, 00:15:32.068 "data_offset": 2048, 00:15:32.068 "data_size": 63488 00:15:32.068 } 00:15:32.068 ] 00:15:32.068 }' 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.068 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.328 88.12 IOPS, 264.38 MiB/s [2024-11-27T19:13:41.964Z] 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.328 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.328 19:13:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.328 [2024-11-27 19:13:41.919986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.328 [2024-11-27 19:13:41.920092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:15:32.588 00:15:32.588 Latency(us) 00:15:32.588 [2024-11-27T19:13:42.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.588 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:32.588 raid_bdev1 : 8.32 85.45 256.36 0.00 0.00 16412.71 334.48 109894.43 00:15:32.588 [2024-11-27T19:13:42.224Z] =================================================================================================================== 00:15:32.588 [2024-11-27T19:13:42.224Z] Total : 85.45 256.36 0.00 0.00 16412.71 334.48 109894.43 00:15:32.588 [2024-11-27 19:13:42.039823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.588 [2024-11-27 19:13:42.039925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.588 [2024-11-27 19:13:42.040058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.588 [2024-11-27 19:13:42.040108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:32.588 { 00:15:32.588 "results": [ 00:15:32.588 { 00:15:32.588 "job": "raid_bdev1", 00:15:32.588 "core_mask": "0x1", 00:15:32.588 "workload": "randrw", 00:15:32.588 "percentage": 50, 00:15:32.588 "status": "finished", 00:15:32.588 "queue_depth": 2, 00:15:32.588 "io_size": 3145728, 00:15:32.588 "runtime": 8.320264, 00:15:32.588 "iops": 85.45401924746619, 00:15:32.588 "mibps": 256.3620577423986, 00:15:32.588 "io_failed": 0, 00:15:32.588 "io_timeout": 0, 00:15:32.588 "avg_latency_us": 16412.71393633421, 00:15:32.588 "min_latency_us": 334.4768558951965, 00:15:32.588 "max_latency_us": 109894.42794759825 00:15:32.588 } 00:15:32.588 ], 00:15:32.588 "core_count": 1 00:15:32.588 } 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # 
jq length 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:32.588 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.589 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:32.849 /dev/nbd0 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.849 1+0 records in 00:15:32.849 1+0 records out 00:15:32.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599274 s, 6.8 MB/s 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:32.849 19:13:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.849 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:33.109 /dev/nbd1 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.109 1+0 records in 00:15:33.109 1+0 records out 00:15:33.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519386 s, 7.9 MB/s 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.109 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.422 19:13:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:33.422 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:33.696 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:33.697 /dev/nbd1 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.697 1+0 records in 00:15:33.697 1+0 records out 00:15:33.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379283 s, 10.8 MB/s 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.697 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:33.957 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:33.957 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.957 19:13:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:33.957 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.957 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:33.957 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.957 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:34.217 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:34.218 19:13:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.218 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.218 
[2024-11-27 19:13:43.848606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:34.218 [2024-11-27 19:13:43.848669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.218 [2024-11-27 19:13:43.848701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:34.218 [2024-11-27 19:13:43.848714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.218 [2024-11-27 19:13:43.851025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.218 [2024-11-27 19:13:43.851063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:34.218 [2024-11-27 19:13:43.851156] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:34.218 [2024-11-27 19:13:43.851208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.218 [2024-11-27 19:13:43.851364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.218 [2024-11-27 19:13:43.851458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:34.477 spare 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.477 [2024-11-27 19:13:43.951374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:34.477 [2024-11-27 19:13:43.951444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:34.477 [2024-11-27 19:13:43.951842] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:34.477 [2024-11-27 19:13:43.952085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:34.477 [2024-11-27 19:13:43.952101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:34.477 [2024-11-27 19:13:43.952295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.477 19:13:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.477 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.477 "name": "raid_bdev1", 00:15:34.477 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:34.477 "strip_size_kb": 0, 00:15:34.477 "state": "online", 00:15:34.477 "raid_level": "raid1", 00:15:34.477 "superblock": true, 00:15:34.477 "num_base_bdevs": 4, 00:15:34.477 "num_base_bdevs_discovered": 3, 00:15:34.477 "num_base_bdevs_operational": 3, 00:15:34.477 "base_bdevs_list": [ 00:15:34.477 { 00:15:34.477 "name": "spare", 00:15:34.477 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:34.477 "is_configured": true, 00:15:34.477 "data_offset": 2048, 00:15:34.477 "data_size": 63488 00:15:34.477 }, 00:15:34.477 { 00:15:34.477 "name": null, 00:15:34.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.478 "is_configured": false, 00:15:34.478 "data_offset": 2048, 00:15:34.478 "data_size": 63488 00:15:34.478 }, 00:15:34.478 { 00:15:34.478 "name": "BaseBdev3", 00:15:34.478 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:34.478 "is_configured": true, 00:15:34.478 "data_offset": 2048, 00:15:34.478 "data_size": 63488 00:15:34.478 }, 00:15:34.478 { 00:15:34.478 "name": "BaseBdev4", 00:15:34.478 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:34.478 "is_configured": true, 00:15:34.478 "data_offset": 2048, 00:15:34.478 "data_size": 63488 00:15:34.478 } 00:15:34.478 ] 00:15:34.478 }' 00:15:34.478 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.478 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.048 "name": "raid_bdev1", 00:15:35.048 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:35.048 "strip_size_kb": 0, 00:15:35.048 "state": "online", 00:15:35.048 "raid_level": "raid1", 00:15:35.048 "superblock": true, 00:15:35.048 "num_base_bdevs": 4, 00:15:35.048 "num_base_bdevs_discovered": 3, 00:15:35.048 "num_base_bdevs_operational": 3, 00:15:35.048 "base_bdevs_list": [ 00:15:35.048 { 00:15:35.048 "name": "spare", 00:15:35.048 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:35.048 "is_configured": true, 00:15:35.048 "data_offset": 2048, 00:15:35.048 "data_size": 63488 00:15:35.048 }, 00:15:35.048 { 00:15:35.048 "name": null, 00:15:35.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.048 "is_configured": false, 00:15:35.048 "data_offset": 2048, 00:15:35.048 "data_size": 63488 00:15:35.048 }, 00:15:35.048 { 00:15:35.048 "name": 
"BaseBdev3", 00:15:35.048 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:35.048 "is_configured": true, 00:15:35.048 "data_offset": 2048, 00:15:35.048 "data_size": 63488 00:15:35.048 }, 00:15:35.048 { 00:15:35.048 "name": "BaseBdev4", 00:15:35.048 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:35.048 "is_configured": true, 00:15:35.048 "data_offset": 2048, 00:15:35.048 "data_size": 63488 00:15:35.048 } 00:15:35.048 ] 00:15:35.048 }' 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.048 [2024-11-27 19:13:44.635556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.048 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.308 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.308 "name": "raid_bdev1", 00:15:35.308 "uuid": 
"1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:35.308 "strip_size_kb": 0, 00:15:35.308 "state": "online", 00:15:35.308 "raid_level": "raid1", 00:15:35.308 "superblock": true, 00:15:35.308 "num_base_bdevs": 4, 00:15:35.308 "num_base_bdevs_discovered": 2, 00:15:35.308 "num_base_bdevs_operational": 2, 00:15:35.308 "base_bdevs_list": [ 00:15:35.308 { 00:15:35.308 "name": null, 00:15:35.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.308 "is_configured": false, 00:15:35.308 "data_offset": 0, 00:15:35.308 "data_size": 63488 00:15:35.308 }, 00:15:35.308 { 00:15:35.308 "name": null, 00:15:35.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.308 "is_configured": false, 00:15:35.308 "data_offset": 2048, 00:15:35.308 "data_size": 63488 00:15:35.308 }, 00:15:35.308 { 00:15:35.308 "name": "BaseBdev3", 00:15:35.308 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:35.308 "is_configured": true, 00:15:35.308 "data_offset": 2048, 00:15:35.308 "data_size": 63488 00:15:35.308 }, 00:15:35.308 { 00:15:35.308 "name": "BaseBdev4", 00:15:35.308 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:35.308 "is_configured": true, 00:15:35.308 "data_offset": 2048, 00:15:35.308 "data_size": 63488 00:15:35.308 } 00:15:35.308 ] 00:15:35.308 }' 00:15:35.308 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.308 19:13:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.568 19:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.568 19:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.568 19:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.568 [2024-11-27 19:13:45.074881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.568 [2024-11-27 19:13:45.075081] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:35.568 [2024-11-27 19:13:45.075106] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:35.568 [2024-11-27 19:13:45.075140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.568 [2024-11-27 19:13:45.089597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:35.568 19:13:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.568 [2024-11-27 19:13:45.091508] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:35.568 19:13:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.507 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.768 "name": "raid_bdev1", 00:15:36.768 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:36.768 "strip_size_kb": 0, 00:15:36.768 "state": "online", 00:15:36.768 "raid_level": "raid1", 00:15:36.768 "superblock": true, 00:15:36.768 "num_base_bdevs": 4, 00:15:36.768 "num_base_bdevs_discovered": 3, 00:15:36.768 "num_base_bdevs_operational": 3, 00:15:36.768 "process": { 00:15:36.768 "type": "rebuild", 00:15:36.768 "target": "spare", 00:15:36.768 "progress": { 00:15:36.768 "blocks": 20480, 00:15:36.768 "percent": 32 00:15:36.768 } 00:15:36.768 }, 00:15:36.768 "base_bdevs_list": [ 00:15:36.768 { 00:15:36.768 "name": "spare", 00:15:36.768 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:36.768 "is_configured": true, 00:15:36.768 "data_offset": 2048, 00:15:36.768 "data_size": 63488 00:15:36.768 }, 00:15:36.768 { 00:15:36.768 "name": null, 00:15:36.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.768 "is_configured": false, 00:15:36.768 "data_offset": 2048, 00:15:36.768 "data_size": 63488 00:15:36.768 }, 00:15:36.768 { 00:15:36.768 "name": "BaseBdev3", 00:15:36.768 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:36.768 "is_configured": true, 00:15:36.768 "data_offset": 2048, 00:15:36.768 "data_size": 63488 00:15:36.768 }, 00:15:36.768 { 00:15:36.768 "name": "BaseBdev4", 00:15:36.768 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:36.768 "is_configured": true, 00:15:36.768 "data_offset": 2048, 00:15:36.768 "data_size": 63488 00:15:36.768 } 00:15:36.768 ] 00:15:36.768 }' 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.768 [2024-11-27 19:13:46.207755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.768 [2024-11-27 19:13:46.297333] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.768 [2024-11-27 19:13:46.297425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.768 [2024-11-27 19:13:46.297439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.768 [2024-11-27 19:13:46.297448] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.768 "name": "raid_bdev1", 00:15:36.768 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:36.768 "strip_size_kb": 0, 00:15:36.768 "state": "online", 00:15:36.768 "raid_level": "raid1", 00:15:36.768 "superblock": true, 00:15:36.768 "num_base_bdevs": 4, 00:15:36.768 "num_base_bdevs_discovered": 2, 00:15:36.768 "num_base_bdevs_operational": 2, 00:15:36.768 "base_bdevs_list": [ 00:15:36.768 { 00:15:36.768 "name": null, 00:15:36.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.768 "is_configured": false, 00:15:36.768 "data_offset": 0, 00:15:36.768 "data_size": 63488 00:15:36.768 }, 00:15:36.768 { 00:15:36.768 "name": null, 00:15:36.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.768 "is_configured": false, 00:15:36.768 "data_offset": 2048, 00:15:36.768 "data_size": 63488 00:15:36.768 }, 00:15:36.768 { 00:15:36.768 "name": "BaseBdev3", 00:15:36.768 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:36.768 "is_configured": true, 00:15:36.768 "data_offset": 2048, 
00:15:36.768 "data_size": 63488 00:15:36.768 }, 00:15:36.768 { 00:15:36.768 "name": "BaseBdev4", 00:15:36.768 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:36.768 "is_configured": true, 00:15:36.768 "data_offset": 2048, 00:15:36.768 "data_size": 63488 00:15:36.768 } 00:15:36.768 ] 00:15:36.768 }' 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.768 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.338 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.338 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.338 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.339 [2024-11-27 19:13:46.776391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:37.339 [2024-11-27 19:13:46.776466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.339 [2024-11-27 19:13:46.776496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:37.339 [2024-11-27 19:13:46.776508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.339 [2024-11-27 19:13:46.777011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.339 [2024-11-27 19:13:46.777033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:37.339 [2024-11-27 19:13:46.777133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:37.339 [2024-11-27 19:13:46.777148] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:37.339 [2024-11-27 19:13:46.777157] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:15:37.339 [2024-11-27 19:13:46.777182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.339 [2024-11-27 19:13:46.790963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:37.339 spare 00:15:37.339 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.339 [2024-11-27 19:13:46.792772] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.339 19:13:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.279 "name": "raid_bdev1", 00:15:38.279 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:38.279 
"strip_size_kb": 0, 00:15:38.279 "state": "online", 00:15:38.279 "raid_level": "raid1", 00:15:38.279 "superblock": true, 00:15:38.279 "num_base_bdevs": 4, 00:15:38.279 "num_base_bdevs_discovered": 3, 00:15:38.279 "num_base_bdevs_operational": 3, 00:15:38.279 "process": { 00:15:38.279 "type": "rebuild", 00:15:38.279 "target": "spare", 00:15:38.279 "progress": { 00:15:38.279 "blocks": 20480, 00:15:38.279 "percent": 32 00:15:38.279 } 00:15:38.279 }, 00:15:38.279 "base_bdevs_list": [ 00:15:38.279 { 00:15:38.279 "name": "spare", 00:15:38.279 "uuid": "449294f0-0cd7-537a-b738-b6ebe1957abd", 00:15:38.279 "is_configured": true, 00:15:38.279 "data_offset": 2048, 00:15:38.279 "data_size": 63488 00:15:38.279 }, 00:15:38.279 { 00:15:38.279 "name": null, 00:15:38.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.279 "is_configured": false, 00:15:38.279 "data_offset": 2048, 00:15:38.279 "data_size": 63488 00:15:38.279 }, 00:15:38.279 { 00:15:38.279 "name": "BaseBdev3", 00:15:38.279 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:38.279 "is_configured": true, 00:15:38.279 "data_offset": 2048, 00:15:38.279 "data_size": 63488 00:15:38.279 }, 00:15:38.279 { 00:15:38.279 "name": "BaseBdev4", 00:15:38.279 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:38.279 "is_configured": true, 00:15:38.279 "data_offset": 2048, 00:15:38.279 "data_size": 63488 00:15:38.279 } 00:15:38.279 ] 00:15:38.279 }' 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.279 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.539 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.539 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:15:38.539 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.539 19:13:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.539 [2024-11-27 19:13:47.940962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.539 [2024-11-27 19:13:47.998581] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:38.539 [2024-11-27 19:13:47.998704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.539 [2024-11-27 19:13:47.998739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.539 [2024-11-27 19:13:47.998747] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.540 
19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.540 "name": "raid_bdev1", 00:15:38.540 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:38.540 "strip_size_kb": 0, 00:15:38.540 "state": "online", 00:15:38.540 "raid_level": "raid1", 00:15:38.540 "superblock": true, 00:15:38.540 "num_base_bdevs": 4, 00:15:38.540 "num_base_bdevs_discovered": 2, 00:15:38.540 "num_base_bdevs_operational": 2, 00:15:38.540 "base_bdevs_list": [ 00:15:38.540 { 00:15:38.540 "name": null, 00:15:38.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.540 "is_configured": false, 00:15:38.540 "data_offset": 0, 00:15:38.540 "data_size": 63488 00:15:38.540 }, 00:15:38.540 { 00:15:38.540 "name": null, 00:15:38.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.540 "is_configured": false, 00:15:38.540 "data_offset": 2048, 00:15:38.540 "data_size": 63488 00:15:38.540 }, 00:15:38.540 { 00:15:38.540 "name": "BaseBdev3", 00:15:38.540 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:38.540 "is_configured": true, 00:15:38.540 "data_offset": 2048, 00:15:38.540 "data_size": 63488 00:15:38.540 }, 00:15:38.540 { 00:15:38.540 "name": "BaseBdev4", 00:15:38.540 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:38.540 "is_configured": true, 00:15:38.540 "data_offset": 2048, 
00:15:38.540 "data_size": 63488 00:15:38.540 } 00:15:38.540 ] 00:15:38.540 }' 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.540 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.111 "name": "raid_bdev1", 00:15:39.111 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:39.111 "strip_size_kb": 0, 00:15:39.111 "state": "online", 00:15:39.111 "raid_level": "raid1", 00:15:39.111 "superblock": true, 00:15:39.111 "num_base_bdevs": 4, 00:15:39.111 "num_base_bdevs_discovered": 2, 00:15:39.111 "num_base_bdevs_operational": 2, 00:15:39.111 "base_bdevs_list": [ 00:15:39.111 { 00:15:39.111 "name": null, 00:15:39.111 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:39.111 "is_configured": false, 00:15:39.111 "data_offset": 0, 00:15:39.111 "data_size": 63488 00:15:39.111 }, 00:15:39.111 { 00:15:39.111 "name": null, 00:15:39.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.111 "is_configured": false, 00:15:39.111 "data_offset": 2048, 00:15:39.111 "data_size": 63488 00:15:39.111 }, 00:15:39.111 { 00:15:39.111 "name": "BaseBdev3", 00:15:39.111 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:39.111 "is_configured": true, 00:15:39.111 "data_offset": 2048, 00:15:39.111 "data_size": 63488 00:15:39.111 }, 00:15:39.111 { 00:15:39.111 "name": "BaseBdev4", 00:15:39.111 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:39.111 "is_configured": true, 00:15:39.111 "data_offset": 2048, 00:15:39.111 "data_size": 63488 00:15:39.111 } 00:15:39.111 ] 00:15:39.111 }' 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.111 [2024-11-27 19:13:48.589944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.111 [2024-11-27 19:13:48.590004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.111 [2024-11-27 19:13:48.590025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:39.111 [2024-11-27 19:13:48.590034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.111 [2024-11-27 19:13:48.590508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.111 [2024-11-27 19:13:48.590525] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.111 [2024-11-27 19:13:48.590609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:39.111 [2024-11-27 19:13:48.590623] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:39.111 [2024-11-27 19:13:48.590635] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.111 [2024-11-27 19:13:48.590646] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:39.111 BaseBdev1 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.111 19:13:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.052 "name": "raid_bdev1", 00:15:40.052 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:40.052 "strip_size_kb": 0, 00:15:40.052 "state": "online", 00:15:40.052 "raid_level": "raid1", 00:15:40.052 "superblock": true, 00:15:40.052 "num_base_bdevs": 4, 00:15:40.052 "num_base_bdevs_discovered": 2, 00:15:40.052 "num_base_bdevs_operational": 2, 00:15:40.052 "base_bdevs_list": [ 00:15:40.052 { 00:15:40.052 "name": null, 00:15:40.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.052 
"is_configured": false, 00:15:40.052 "data_offset": 0, 00:15:40.052 "data_size": 63488 00:15:40.052 }, 00:15:40.052 { 00:15:40.052 "name": null, 00:15:40.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.052 "is_configured": false, 00:15:40.052 "data_offset": 2048, 00:15:40.052 "data_size": 63488 00:15:40.052 }, 00:15:40.052 { 00:15:40.052 "name": "BaseBdev3", 00:15:40.052 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:40.052 "is_configured": true, 00:15:40.052 "data_offset": 2048, 00:15:40.052 "data_size": 63488 00:15:40.052 }, 00:15:40.052 { 00:15:40.052 "name": "BaseBdev4", 00:15:40.052 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:40.052 "is_configured": true, 00:15:40.052 "data_offset": 2048, 00:15:40.052 "data_size": 63488 00:15:40.052 } 00:15:40.052 ] 00:15:40.052 }' 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.052 19:13:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.633 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.633 "name": "raid_bdev1", 00:15:40.633 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:40.633 "strip_size_kb": 0, 00:15:40.633 "state": "online", 00:15:40.633 "raid_level": "raid1", 00:15:40.633 "superblock": true, 00:15:40.633 "num_base_bdevs": 4, 00:15:40.633 "num_base_bdevs_discovered": 2, 00:15:40.633 "num_base_bdevs_operational": 2, 00:15:40.633 "base_bdevs_list": [ 00:15:40.633 { 00:15:40.633 "name": null, 00:15:40.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.633 "is_configured": false, 00:15:40.633 "data_offset": 0, 00:15:40.633 "data_size": 63488 00:15:40.633 }, 00:15:40.633 { 00:15:40.633 "name": null, 00:15:40.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.633 "is_configured": false, 00:15:40.633 "data_offset": 2048, 00:15:40.633 "data_size": 63488 00:15:40.633 }, 00:15:40.633 { 00:15:40.633 "name": "BaseBdev3", 00:15:40.633 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:40.633 "is_configured": true, 00:15:40.633 "data_offset": 2048, 00:15:40.633 "data_size": 63488 00:15:40.633 }, 00:15:40.633 { 00:15:40.633 "name": "BaseBdev4", 00:15:40.633 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:40.633 "is_configured": true, 00:15:40.633 "data_offset": 2048, 00:15:40.633 "data_size": 63488 00:15:40.633 } 00:15:40.633 ] 00:15:40.633 }' 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.634 [2024-11-27 19:13:50.219524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.634 [2024-11-27 19:13:50.219780] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:40.634 [2024-11-27 19:13:50.219806] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:40.634 request: 00:15:40.634 { 00:15:40.634 "base_bdev": "BaseBdev1", 00:15:40.634 "raid_bdev": "raid_bdev1", 00:15:40.634 "method": "bdev_raid_add_base_bdev", 00:15:40.634 "req_id": 1 00:15:40.634 } 00:15:40.634 Got JSON-RPC error response 00:15:40.634 response: 00:15:40.634 { 
00:15:40.634 "code": -22, 00:15:40.634 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:40.634 } 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:40.634 19:13:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.012 "name": "raid_bdev1", 00:15:42.012 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:42.012 "strip_size_kb": 0, 00:15:42.012 "state": "online", 00:15:42.012 "raid_level": "raid1", 00:15:42.012 "superblock": true, 00:15:42.012 "num_base_bdevs": 4, 00:15:42.012 "num_base_bdevs_discovered": 2, 00:15:42.012 "num_base_bdevs_operational": 2, 00:15:42.012 "base_bdevs_list": [ 00:15:42.012 { 00:15:42.012 "name": null, 00:15:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.012 "is_configured": false, 00:15:42.012 "data_offset": 0, 00:15:42.012 "data_size": 63488 00:15:42.012 }, 00:15:42.012 { 00:15:42.012 "name": null, 00:15:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.012 "is_configured": false, 00:15:42.012 "data_offset": 2048, 00:15:42.012 "data_size": 63488 00:15:42.012 }, 00:15:42.012 { 00:15:42.012 "name": "BaseBdev3", 00:15:42.012 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:42.012 "is_configured": true, 00:15:42.012 "data_offset": 2048, 00:15:42.012 "data_size": 63488 00:15:42.012 }, 00:15:42.012 { 00:15:42.012 "name": "BaseBdev4", 00:15:42.012 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:42.012 "is_configured": true, 00:15:42.012 "data_offset": 2048, 00:15:42.012 "data_size": 63488 00:15:42.012 } 00:15:42.012 ] 00:15:42.012 }' 00:15:42.012 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.012 19:13:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.271 "name": "raid_bdev1", 00:15:42.271 "uuid": "1e9f020a-ec3a-4305-b991-edfccec20359", 00:15:42.271 "strip_size_kb": 0, 00:15:42.271 "state": "online", 00:15:42.271 "raid_level": "raid1", 00:15:42.271 "superblock": true, 00:15:42.271 "num_base_bdevs": 4, 00:15:42.271 "num_base_bdevs_discovered": 2, 00:15:42.271 "num_base_bdevs_operational": 2, 00:15:42.271 "base_bdevs_list": [ 00:15:42.271 { 00:15:42.271 "name": null, 00:15:42.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.271 "is_configured": false, 00:15:42.271 "data_offset": 0, 00:15:42.271 "data_size": 63488 00:15:42.271 }, 00:15:42.271 { 00:15:42.271 "name": null, 00:15:42.271 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:42.271 "is_configured": false, 00:15:42.271 "data_offset": 2048, 00:15:42.271 "data_size": 63488 00:15:42.271 }, 00:15:42.271 { 00:15:42.271 "name": "BaseBdev3", 00:15:42.271 "uuid": "88c66e63-ba2e-5660-9981-974dca655ee2", 00:15:42.271 "is_configured": true, 00:15:42.271 "data_offset": 2048, 00:15:42.271 "data_size": 63488 00:15:42.271 }, 00:15:42.271 { 00:15:42.271 "name": "BaseBdev4", 00:15:42.271 "uuid": "0d389240-747d-585d-8ced-d2fa0b2a641b", 00:15:42.271 "is_configured": true, 00:15:42.271 "data_offset": 2048, 00:15:42.271 "data_size": 63488 00:15:42.271 } 00:15:42.271 ] 00:15:42.271 }' 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79238 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79238 ']' 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79238 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:42.271 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.272 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79238 00:15:42.272 killing process with pid 79238 00:15:42.272 Received shutdown signal, test time was about 18.222585 seconds 00:15:42.272 00:15:42.272 Latency(us) 00:15:42.272 [2024-11-27T19:13:51.908Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:42.272 [2024-11-27T19:13:51.908Z] =================================================================================================================== 00:15:42.272 [2024-11-27T19:13:51.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.272 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.272 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.272 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79238' 00:15:42.272 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79238 00:15:42.272 [2024-11-27 19:13:51.903034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.272 19:13:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79238 00:15:42.272 [2024-11-27 19:13:51.903193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.272 [2024-11-27 19:13:51.903277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.272 [2024-11-27 19:13:51.903296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:42.840 [2024-11-27 19:13:52.354203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.219 19:13:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:44.219 00:15:44.219 real 0m21.805s 00:15:44.219 user 0m28.305s 00:15:44.219 sys 0m2.761s 00:15:44.219 19:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.219 19:13:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.219 ************************************ 00:15:44.219 END TEST raid_rebuild_test_sb_io 00:15:44.219 
************************************ 00:15:44.219 19:13:53 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:44.219 19:13:53 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:44.219 19:13:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:44.219 19:13:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.219 19:13:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.219 ************************************ 00:15:44.219 START TEST raid5f_state_function_test 00:15:44.219 ************************************ 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.219 19:13:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79960 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:44.219 19:13:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79960' 00:15:44.219 Process raid pid: 79960 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79960 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79960 ']' 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.219 19:13:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.219 [2024-11-27 19:13:53.777448] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:44.219 [2024-11-27 19:13:53.777653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.479 [2024-11-27 19:13:53.958754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.479 [2024-11-27 19:13:54.087844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.740 [2024-11-27 19:13:54.326886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.740 [2024-11-27 19:13:54.326934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.055 [2024-11-27 19:13:54.613980] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.055 [2024-11-27 19:13:54.614124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.055 [2024-11-27 19:13:54.614156] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.055 [2024-11-27 19:13:54.614184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.055 [2024-11-27 19:13:54.614204] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:45.055 [2024-11-27 19:13:54.614226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.055 "name": "Existed_Raid", 00:15:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.055 "strip_size_kb": 64, 00:15:45.055 "state": "configuring", 00:15:45.055 "raid_level": "raid5f", 00:15:45.055 "superblock": false, 00:15:45.055 "num_base_bdevs": 3, 00:15:45.055 "num_base_bdevs_discovered": 0, 00:15:45.055 "num_base_bdevs_operational": 3, 00:15:45.055 "base_bdevs_list": [ 00:15:45.055 { 00:15:45.055 "name": "BaseBdev1", 00:15:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.055 "is_configured": false, 00:15:45.055 "data_offset": 0, 00:15:45.055 "data_size": 0 00:15:45.055 }, 00:15:45.055 { 00:15:45.055 "name": "BaseBdev2", 00:15:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.055 "is_configured": false, 00:15:45.055 "data_offset": 0, 00:15:45.055 "data_size": 0 00:15:45.055 }, 00:15:45.055 { 00:15:45.055 "name": "BaseBdev3", 00:15:45.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.055 "is_configured": false, 00:15:45.055 "data_offset": 0, 00:15:45.055 "data_size": 0 00:15:45.055 } 00:15:45.055 ] 00:15:45.055 }' 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.055 19:13:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.624 [2024-11-27 19:13:55.089065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.624 [2024-11-27 19:13:55.089104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.624 [2024-11-27 19:13:55.101059] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.624 [2024-11-27 19:13:55.101146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.624 [2024-11-27 19:13:55.101173] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.624 [2024-11-27 19:13:55.101195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.624 [2024-11-27 19:13:55.101212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.624 [2024-11-27 19:13:55.101232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.624 BaseBdev1 00:15:45.624 [2024-11-27 19:13:55.154833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.624 19:13:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.624 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.625 [ 00:15:45.625 { 00:15:45.625 "name": "BaseBdev1", 00:15:45.625 "aliases": [ 00:15:45.625 "6e5514e6-7f85-46c6-85c3-3b5e245a24ae" 00:15:45.625 ], 00:15:45.625 "product_name": "Malloc disk", 00:15:45.625 "block_size": 512, 00:15:45.625 "num_blocks": 65536, 00:15:45.625 "uuid": "6e5514e6-7f85-46c6-85c3-3b5e245a24ae", 00:15:45.625 "assigned_rate_limits": { 00:15:45.625 "rw_ios_per_sec": 0, 00:15:45.625 
"rw_mbytes_per_sec": 0, 00:15:45.625 "r_mbytes_per_sec": 0, 00:15:45.625 "w_mbytes_per_sec": 0 00:15:45.625 }, 00:15:45.625 "claimed": true, 00:15:45.625 "claim_type": "exclusive_write", 00:15:45.625 "zoned": false, 00:15:45.625 "supported_io_types": { 00:15:45.625 "read": true, 00:15:45.625 "write": true, 00:15:45.625 "unmap": true, 00:15:45.625 "flush": true, 00:15:45.625 "reset": true, 00:15:45.625 "nvme_admin": false, 00:15:45.625 "nvme_io": false, 00:15:45.625 "nvme_io_md": false, 00:15:45.625 "write_zeroes": true, 00:15:45.625 "zcopy": true, 00:15:45.625 "get_zone_info": false, 00:15:45.625 "zone_management": false, 00:15:45.625 "zone_append": false, 00:15:45.625 "compare": false, 00:15:45.625 "compare_and_write": false, 00:15:45.625 "abort": true, 00:15:45.625 "seek_hole": false, 00:15:45.625 "seek_data": false, 00:15:45.625 "copy": true, 00:15:45.625 "nvme_iov_md": false 00:15:45.625 }, 00:15:45.625 "memory_domains": [ 00:15:45.625 { 00:15:45.625 "dma_device_id": "system", 00:15:45.625 "dma_device_type": 1 00:15:45.625 }, 00:15:45.625 { 00:15:45.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.625 "dma_device_type": 2 00:15:45.625 } 00:15:45.625 ], 00:15:45.625 "driver_specific": {} 00:15:45.625 } 00:15:45.625 ] 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.625 19:13:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.625 "name": "Existed_Raid", 00:15:45.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.625 "strip_size_kb": 64, 00:15:45.625 "state": "configuring", 00:15:45.625 "raid_level": "raid5f", 00:15:45.625 "superblock": false, 00:15:45.625 "num_base_bdevs": 3, 00:15:45.625 "num_base_bdevs_discovered": 1, 00:15:45.625 "num_base_bdevs_operational": 3, 00:15:45.625 "base_bdevs_list": [ 00:15:45.625 { 00:15:45.625 "name": "BaseBdev1", 00:15:45.625 "uuid": "6e5514e6-7f85-46c6-85c3-3b5e245a24ae", 00:15:45.625 "is_configured": true, 00:15:45.625 "data_offset": 0, 00:15:45.625 "data_size": 65536 00:15:45.625 }, 00:15:45.625 { 00:15:45.625 "name": 
"BaseBdev2", 00:15:45.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.625 "is_configured": false, 00:15:45.625 "data_offset": 0, 00:15:45.625 "data_size": 0 00:15:45.625 }, 00:15:45.625 { 00:15:45.625 "name": "BaseBdev3", 00:15:45.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.625 "is_configured": false, 00:15:45.625 "data_offset": 0, 00:15:45.625 "data_size": 0 00:15:45.625 } 00:15:45.625 ] 00:15:45.625 }' 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.625 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.194 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.194 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.195 [2024-11-27 19:13:55.626046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.195 [2024-11-27 19:13:55.626163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.195 [2024-11-27 19:13:55.638066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.195 [2024-11-27 19:13:55.640183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:46.195 [2024-11-27 19:13:55.640264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.195 [2024-11-27 19:13:55.640293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.195 [2024-11-27 19:13:55.640316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.195 "name": "Existed_Raid", 00:15:46.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.195 "strip_size_kb": 64, 00:15:46.195 "state": "configuring", 00:15:46.195 "raid_level": "raid5f", 00:15:46.195 "superblock": false, 00:15:46.195 "num_base_bdevs": 3, 00:15:46.195 "num_base_bdevs_discovered": 1, 00:15:46.195 "num_base_bdevs_operational": 3, 00:15:46.195 "base_bdevs_list": [ 00:15:46.195 { 00:15:46.195 "name": "BaseBdev1", 00:15:46.195 "uuid": "6e5514e6-7f85-46c6-85c3-3b5e245a24ae", 00:15:46.195 "is_configured": true, 00:15:46.195 "data_offset": 0, 00:15:46.195 "data_size": 65536 00:15:46.195 }, 00:15:46.195 { 00:15:46.195 "name": "BaseBdev2", 00:15:46.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.195 "is_configured": false, 00:15:46.195 "data_offset": 0, 00:15:46.195 "data_size": 0 00:15:46.195 }, 00:15:46.195 { 00:15:46.195 "name": "BaseBdev3", 00:15:46.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.195 "is_configured": false, 00:15:46.195 "data_offset": 0, 00:15:46.195 "data_size": 0 00:15:46.195 } 00:15:46.195 ] 00:15:46.195 }' 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.195 19:13:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.455 [2024-11-27 19:13:56.076329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.455 BaseBdev2 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 19:13:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.715 [ 00:15:46.715 { 00:15:46.715 "name": "BaseBdev2", 00:15:46.715 "aliases": [ 00:15:46.715 "26eb5e5f-92dd-4fbf-8842-0cf9c4ad6dc8" 00:15:46.715 ], 00:15:46.715 "product_name": "Malloc disk", 00:15:46.715 "block_size": 512, 00:15:46.715 "num_blocks": 65536, 00:15:46.715 "uuid": "26eb5e5f-92dd-4fbf-8842-0cf9c4ad6dc8", 00:15:46.715 "assigned_rate_limits": { 00:15:46.715 "rw_ios_per_sec": 0, 00:15:46.715 "rw_mbytes_per_sec": 0, 00:15:46.715 "r_mbytes_per_sec": 0, 00:15:46.715 "w_mbytes_per_sec": 0 00:15:46.715 }, 00:15:46.715 "claimed": true, 00:15:46.715 "claim_type": "exclusive_write", 00:15:46.715 "zoned": false, 00:15:46.715 "supported_io_types": { 00:15:46.715 "read": true, 00:15:46.715 "write": true, 00:15:46.715 "unmap": true, 00:15:46.715 "flush": true, 00:15:46.715 "reset": true, 00:15:46.715 "nvme_admin": false, 00:15:46.715 "nvme_io": false, 00:15:46.715 "nvme_io_md": false, 00:15:46.715 "write_zeroes": true, 00:15:46.715 "zcopy": true, 00:15:46.715 "get_zone_info": false, 00:15:46.715 "zone_management": false, 00:15:46.715 "zone_append": false, 00:15:46.715 "compare": false, 00:15:46.715 "compare_and_write": false, 00:15:46.715 "abort": true, 00:15:46.715 "seek_hole": false, 00:15:46.715 "seek_data": false, 00:15:46.715 "copy": true, 00:15:46.715 "nvme_iov_md": false 00:15:46.715 }, 00:15:46.715 "memory_domains": [ 00:15:46.715 { 00:15:46.715 "dma_device_id": "system", 00:15:46.715 "dma_device_type": 1 00:15:46.715 }, 00:15:46.715 { 00:15:46.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.715 "dma_device_type": 2 00:15:46.715 } 00:15:46.715 ], 00:15:46.715 "driver_specific": {} 00:15:46.715 } 00:15:46.715 ] 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.715 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:46.715 "name": "Existed_Raid", 00:15:46.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.715 "strip_size_kb": 64, 00:15:46.715 "state": "configuring", 00:15:46.715 "raid_level": "raid5f", 00:15:46.715 "superblock": false, 00:15:46.715 "num_base_bdevs": 3, 00:15:46.716 "num_base_bdevs_discovered": 2, 00:15:46.716 "num_base_bdevs_operational": 3, 00:15:46.716 "base_bdevs_list": [ 00:15:46.716 { 00:15:46.716 "name": "BaseBdev1", 00:15:46.716 "uuid": "6e5514e6-7f85-46c6-85c3-3b5e245a24ae", 00:15:46.716 "is_configured": true, 00:15:46.716 "data_offset": 0, 00:15:46.716 "data_size": 65536 00:15:46.716 }, 00:15:46.716 { 00:15:46.716 "name": "BaseBdev2", 00:15:46.716 "uuid": "26eb5e5f-92dd-4fbf-8842-0cf9c4ad6dc8", 00:15:46.716 "is_configured": true, 00:15:46.716 "data_offset": 0, 00:15:46.716 "data_size": 65536 00:15:46.716 }, 00:15:46.716 { 00:15:46.716 "name": "BaseBdev3", 00:15:46.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.716 "is_configured": false, 00:15:46.716 "data_offset": 0, 00:15:46.716 "data_size": 0 00:15:46.716 } 00:15:46.716 ] 00:15:46.716 }' 00:15:46.716 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.716 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.976 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.976 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.976 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.236 [2024-11-27 19:13:56.644455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.237 [2024-11-27 19:13:56.644632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.237 [2024-11-27 19:13:56.644669] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:47.237 [2024-11-27 19:13:56.645018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:47.237 [2024-11-27 19:13:56.650529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.237 [2024-11-27 19:13:56.650586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.237 [2024-11-27 19:13:56.650928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.237 BaseBdev3 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.237 [ 00:15:47.237 { 00:15:47.237 "name": "BaseBdev3", 00:15:47.237 "aliases": [ 00:15:47.237 "ea46e31d-f6da-4689-b8db-0b26607e9851" 00:15:47.237 ], 00:15:47.237 "product_name": "Malloc disk", 00:15:47.237 "block_size": 512, 00:15:47.237 "num_blocks": 65536, 00:15:47.237 "uuid": "ea46e31d-f6da-4689-b8db-0b26607e9851", 00:15:47.237 "assigned_rate_limits": { 00:15:47.237 "rw_ios_per_sec": 0, 00:15:47.237 "rw_mbytes_per_sec": 0, 00:15:47.237 "r_mbytes_per_sec": 0, 00:15:47.237 "w_mbytes_per_sec": 0 00:15:47.237 }, 00:15:47.237 "claimed": true, 00:15:47.237 "claim_type": "exclusive_write", 00:15:47.237 "zoned": false, 00:15:47.237 "supported_io_types": { 00:15:47.237 "read": true, 00:15:47.237 "write": true, 00:15:47.237 "unmap": true, 00:15:47.237 "flush": true, 00:15:47.237 "reset": true, 00:15:47.237 "nvme_admin": false, 00:15:47.237 "nvme_io": false, 00:15:47.237 "nvme_io_md": false, 00:15:47.237 "write_zeroes": true, 00:15:47.237 "zcopy": true, 00:15:47.237 "get_zone_info": false, 00:15:47.237 "zone_management": false, 00:15:47.237 "zone_append": false, 00:15:47.237 "compare": false, 00:15:47.237 "compare_and_write": false, 00:15:47.237 "abort": true, 00:15:47.237 "seek_hole": false, 00:15:47.237 "seek_data": false, 00:15:47.237 "copy": true, 00:15:47.237 "nvme_iov_md": false 00:15:47.237 }, 00:15:47.237 "memory_domains": [ 00:15:47.237 { 00:15:47.237 "dma_device_id": "system", 00:15:47.237 "dma_device_type": 1 00:15:47.237 }, 00:15:47.237 { 00:15:47.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.237 "dma_device_type": 2 00:15:47.237 } 00:15:47.237 ], 00:15:47.237 "driver_specific": {} 00:15:47.237 } 00:15:47.237 ] 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.237 19:13:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.237 "name": "Existed_Raid", 00:15:47.237 "uuid": "8c6bdd63-bcbc-4de6-bc75-07e145674d08", 00:15:47.237 "strip_size_kb": 64, 00:15:47.237 "state": "online", 00:15:47.237 "raid_level": "raid5f", 00:15:47.237 "superblock": false, 00:15:47.237 "num_base_bdevs": 3, 00:15:47.237 "num_base_bdevs_discovered": 3, 00:15:47.237 "num_base_bdevs_operational": 3, 00:15:47.237 "base_bdevs_list": [ 00:15:47.237 { 00:15:47.237 "name": "BaseBdev1", 00:15:47.237 "uuid": "6e5514e6-7f85-46c6-85c3-3b5e245a24ae", 00:15:47.237 "is_configured": true, 00:15:47.237 "data_offset": 0, 00:15:47.237 "data_size": 65536 00:15:47.237 }, 00:15:47.237 { 00:15:47.237 "name": "BaseBdev2", 00:15:47.237 "uuid": "26eb5e5f-92dd-4fbf-8842-0cf9c4ad6dc8", 00:15:47.237 "is_configured": true, 00:15:47.237 "data_offset": 0, 00:15:47.237 "data_size": 65536 00:15:47.237 }, 00:15:47.237 { 00:15:47.237 "name": "BaseBdev3", 00:15:47.237 "uuid": "ea46e31d-f6da-4689-b8db-0b26607e9851", 00:15:47.237 "is_configured": true, 00:15:47.237 "data_offset": 0, 00:15:47.237 "data_size": 65536 00:15:47.237 } 00:15:47.237 ] 00:15:47.237 }' 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.237 19:13:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.806 19:13:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.806 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.807 [2024-11-27 19:13:57.161093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.807 "name": "Existed_Raid", 00:15:47.807 "aliases": [ 00:15:47.807 "8c6bdd63-bcbc-4de6-bc75-07e145674d08" 00:15:47.807 ], 00:15:47.807 "product_name": "Raid Volume", 00:15:47.807 "block_size": 512, 00:15:47.807 "num_blocks": 131072, 00:15:47.807 "uuid": "8c6bdd63-bcbc-4de6-bc75-07e145674d08", 00:15:47.807 "assigned_rate_limits": { 00:15:47.807 "rw_ios_per_sec": 0, 00:15:47.807 "rw_mbytes_per_sec": 0, 00:15:47.807 "r_mbytes_per_sec": 0, 00:15:47.807 "w_mbytes_per_sec": 0 00:15:47.807 }, 00:15:47.807 "claimed": false, 00:15:47.807 "zoned": false, 00:15:47.807 "supported_io_types": { 00:15:47.807 "read": true, 00:15:47.807 "write": true, 00:15:47.807 "unmap": false, 00:15:47.807 "flush": false, 00:15:47.807 "reset": true, 00:15:47.807 "nvme_admin": false, 00:15:47.807 "nvme_io": false, 00:15:47.807 "nvme_io_md": false, 00:15:47.807 "write_zeroes": true, 00:15:47.807 "zcopy": false, 00:15:47.807 "get_zone_info": false, 00:15:47.807 "zone_management": false, 00:15:47.807 "zone_append": false, 
00:15:47.807 "compare": false, 00:15:47.807 "compare_and_write": false, 00:15:47.807 "abort": false, 00:15:47.807 "seek_hole": false, 00:15:47.807 "seek_data": false, 00:15:47.807 "copy": false, 00:15:47.807 "nvme_iov_md": false 00:15:47.807 }, 00:15:47.807 "driver_specific": { 00:15:47.807 "raid": { 00:15:47.807 "uuid": "8c6bdd63-bcbc-4de6-bc75-07e145674d08", 00:15:47.807 "strip_size_kb": 64, 00:15:47.807 "state": "online", 00:15:47.807 "raid_level": "raid5f", 00:15:47.807 "superblock": false, 00:15:47.807 "num_base_bdevs": 3, 00:15:47.807 "num_base_bdevs_discovered": 3, 00:15:47.807 "num_base_bdevs_operational": 3, 00:15:47.807 "base_bdevs_list": [ 00:15:47.807 { 00:15:47.807 "name": "BaseBdev1", 00:15:47.807 "uuid": "6e5514e6-7f85-46c6-85c3-3b5e245a24ae", 00:15:47.807 "is_configured": true, 00:15:47.807 "data_offset": 0, 00:15:47.807 "data_size": 65536 00:15:47.807 }, 00:15:47.807 { 00:15:47.807 "name": "BaseBdev2", 00:15:47.807 "uuid": "26eb5e5f-92dd-4fbf-8842-0cf9c4ad6dc8", 00:15:47.807 "is_configured": true, 00:15:47.807 "data_offset": 0, 00:15:47.807 "data_size": 65536 00:15:47.807 }, 00:15:47.807 { 00:15:47.807 "name": "BaseBdev3", 00:15:47.807 "uuid": "ea46e31d-f6da-4689-b8db-0b26607e9851", 00:15:47.807 "is_configured": true, 00:15:47.807 "data_offset": 0, 00:15:47.807 "data_size": 65536 00:15:47.807 } 00:15:47.807 ] 00:15:47.807 } 00:15:47.807 } 00:15:47.807 }' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:47.807 BaseBdev2 00:15:47.807 BaseBdev3' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.807 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 [2024-11-27 19:13:57.432504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:48.068 
19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.068 "name": "Existed_Raid", 00:15:48.068 "uuid": "8c6bdd63-bcbc-4de6-bc75-07e145674d08", 00:15:48.068 "strip_size_kb": 64, 00:15:48.068 "state": 
"online", 00:15:48.068 "raid_level": "raid5f", 00:15:48.068 "superblock": false, 00:15:48.068 "num_base_bdevs": 3, 00:15:48.068 "num_base_bdevs_discovered": 2, 00:15:48.068 "num_base_bdevs_operational": 2, 00:15:48.068 "base_bdevs_list": [ 00:15:48.068 { 00:15:48.068 "name": null, 00:15:48.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.068 "is_configured": false, 00:15:48.068 "data_offset": 0, 00:15:48.068 "data_size": 65536 00:15:48.068 }, 00:15:48.068 { 00:15:48.068 "name": "BaseBdev2", 00:15:48.068 "uuid": "26eb5e5f-92dd-4fbf-8842-0cf9c4ad6dc8", 00:15:48.068 "is_configured": true, 00:15:48.068 "data_offset": 0, 00:15:48.068 "data_size": 65536 00:15:48.068 }, 00:15:48.068 { 00:15:48.068 "name": "BaseBdev3", 00:15:48.068 "uuid": "ea46e31d-f6da-4689-b8db-0b26607e9851", 00:15:48.068 "is_configured": true, 00:15:48.068 "data_offset": 0, 00:15:48.068 "data_size": 65536 00:15:48.068 } 00:15:48.068 ] 00:15:48.068 }' 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.068 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.328 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.588 19:13:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:48.588 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.588 19:13:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:48.588 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.588 19:13:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.589 [2024-11-27 19:13:57.975219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.589 [2024-11-27 19:13:57.975375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.589 [2024-11-27 19:13:58.077811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.589 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.589 [2024-11-27 19:13:58.137749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.589 [2024-11-27 19:13:58.137844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.849 BaseBdev2 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.849 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:48.849 [ 00:15:48.849 { 00:15:48.849 "name": "BaseBdev2", 00:15:48.849 "aliases": [ 00:15:48.849 "546a10be-637f-4854-851c-4005134926bb" 00:15:48.849 ], 00:15:48.849 "product_name": "Malloc disk", 00:15:48.849 "block_size": 512, 00:15:48.849 "num_blocks": 65536, 00:15:48.849 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:48.849 "assigned_rate_limits": { 00:15:48.849 "rw_ios_per_sec": 0, 00:15:48.849 "rw_mbytes_per_sec": 0, 00:15:48.850 "r_mbytes_per_sec": 0, 00:15:48.850 "w_mbytes_per_sec": 0 00:15:48.850 }, 00:15:48.850 "claimed": false, 00:15:48.850 "zoned": false, 00:15:48.850 "supported_io_types": { 00:15:48.850 "read": true, 00:15:48.850 "write": true, 00:15:48.850 "unmap": true, 00:15:48.850 "flush": true, 00:15:48.850 "reset": true, 00:15:48.850 "nvme_admin": false, 00:15:48.850 "nvme_io": false, 00:15:48.850 "nvme_io_md": false, 00:15:48.850 "write_zeroes": true, 00:15:48.850 "zcopy": true, 00:15:48.850 "get_zone_info": false, 00:15:48.850 "zone_management": false, 00:15:48.850 "zone_append": false, 00:15:48.850 "compare": false, 00:15:48.850 "compare_and_write": false, 00:15:48.850 "abort": true, 00:15:48.850 "seek_hole": false, 00:15:48.850 "seek_data": false, 00:15:48.850 "copy": true, 00:15:48.850 "nvme_iov_md": false 00:15:48.850 }, 00:15:48.850 "memory_domains": [ 00:15:48.850 { 00:15:48.850 "dma_device_id": "system", 00:15:48.850 "dma_device_type": 1 00:15:48.850 }, 00:15:48.850 { 00:15:48.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.850 "dma_device_type": 2 00:15:48.850 } 00:15:48.850 ], 00:15:48.850 "driver_specific": {} 00:15:48.850 } 00:15:48.850 ] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.850 BaseBdev3 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.850 [ 00:15:48.850 { 00:15:48.850 "name": "BaseBdev3", 00:15:48.850 "aliases": [ 00:15:48.850 "19be05b7-f34d-405e-b26d-16f231209b6b" 00:15:48.850 ], 00:15:48.850 "product_name": "Malloc disk", 00:15:48.850 "block_size": 512, 00:15:48.850 "num_blocks": 65536, 00:15:48.850 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:48.850 "assigned_rate_limits": { 00:15:48.850 "rw_ios_per_sec": 0, 00:15:48.850 "rw_mbytes_per_sec": 0, 00:15:48.850 "r_mbytes_per_sec": 0, 00:15:48.850 "w_mbytes_per_sec": 0 00:15:48.850 }, 00:15:48.850 "claimed": false, 00:15:48.850 "zoned": false, 00:15:48.850 "supported_io_types": { 00:15:48.850 "read": true, 00:15:48.850 "write": true, 00:15:48.850 "unmap": true, 00:15:48.850 "flush": true, 00:15:48.850 "reset": true, 00:15:48.850 "nvme_admin": false, 00:15:48.850 "nvme_io": false, 00:15:48.850 "nvme_io_md": false, 00:15:48.850 "write_zeroes": true, 00:15:48.850 "zcopy": true, 00:15:48.850 "get_zone_info": false, 00:15:48.850 "zone_management": false, 00:15:48.850 "zone_append": false, 00:15:48.850 "compare": false, 00:15:48.850 "compare_and_write": false, 00:15:48.850 "abort": true, 00:15:48.850 "seek_hole": false, 00:15:48.850 "seek_data": false, 00:15:48.850 "copy": true, 00:15:48.850 "nvme_iov_md": false 00:15:48.850 }, 00:15:48.850 "memory_domains": [ 00:15:48.850 { 00:15:48.850 "dma_device_id": "system", 00:15:48.850 "dma_device_type": 1 00:15:48.850 }, 00:15:48.850 { 00:15:48.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.850 "dma_device_type": 2 00:15:48.850 } 00:15:48.850 ], 00:15:48.850 "driver_specific": {} 00:15:48.850 } 00:15:48.850 ] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.850 19:13:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.850 [2024-11-27 19:13:58.466530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.850 [2024-11-27 19:13:58.466575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.850 [2024-11-27 19:13:58.466597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.850 [2024-11-27 19:13:58.468657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.850 19:13:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.850 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.110 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.110 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.110 "name": "Existed_Raid", 00:15:49.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.110 "strip_size_kb": 64, 00:15:49.110 "state": "configuring", 00:15:49.110 "raid_level": "raid5f", 00:15:49.110 "superblock": false, 00:15:49.110 "num_base_bdevs": 3, 00:15:49.110 "num_base_bdevs_discovered": 2, 00:15:49.110 "num_base_bdevs_operational": 3, 00:15:49.110 "base_bdevs_list": [ 00:15:49.110 { 00:15:49.110 "name": "BaseBdev1", 00:15:49.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.110 "is_configured": false, 00:15:49.110 "data_offset": 0, 00:15:49.110 "data_size": 0 00:15:49.110 }, 00:15:49.110 { 00:15:49.110 "name": "BaseBdev2", 00:15:49.110 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:49.110 "is_configured": true, 00:15:49.110 "data_offset": 0, 00:15:49.110 "data_size": 65536 00:15:49.110 }, 00:15:49.110 { 00:15:49.110 "name": "BaseBdev3", 00:15:49.110 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:49.110 "is_configured": true, 
00:15:49.110 "data_offset": 0, 00:15:49.110 "data_size": 65536 00:15:49.110 } 00:15:49.110 ] 00:15:49.110 }' 00:15:49.110 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.110 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.371 [2024-11-27 19:13:58.893864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.371 19:13:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.371 "name": "Existed_Raid", 00:15:49.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.371 "strip_size_kb": 64, 00:15:49.371 "state": "configuring", 00:15:49.371 "raid_level": "raid5f", 00:15:49.371 "superblock": false, 00:15:49.371 "num_base_bdevs": 3, 00:15:49.371 "num_base_bdevs_discovered": 1, 00:15:49.371 "num_base_bdevs_operational": 3, 00:15:49.371 "base_bdevs_list": [ 00:15:49.371 { 00:15:49.371 "name": "BaseBdev1", 00:15:49.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.371 "is_configured": false, 00:15:49.371 "data_offset": 0, 00:15:49.371 "data_size": 0 00:15:49.371 }, 00:15:49.371 { 00:15:49.371 "name": null, 00:15:49.371 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:49.371 "is_configured": false, 00:15:49.371 "data_offset": 0, 00:15:49.371 "data_size": 65536 00:15:49.371 }, 00:15:49.371 { 00:15:49.371 "name": "BaseBdev3", 00:15:49.371 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:49.371 "is_configured": true, 00:15:49.371 "data_offset": 0, 00:15:49.371 "data_size": 65536 00:15:49.371 } 00:15:49.371 ] 00:15:49.371 }' 00:15:49.371 19:13:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.371 19:13:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.940 [2024-11-27 19:13:59.423739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.940 BaseBdev1 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.940 19:13:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.940 [ 00:15:49.940 { 00:15:49.940 "name": "BaseBdev1", 00:15:49.940 "aliases": [ 00:15:49.940 "d0382913-3456-4aa8-a11c-81541a6e9251" 00:15:49.940 ], 00:15:49.940 "product_name": "Malloc disk", 00:15:49.940 "block_size": 512, 00:15:49.940 "num_blocks": 65536, 00:15:49.940 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:49.940 "assigned_rate_limits": { 00:15:49.940 "rw_ios_per_sec": 0, 00:15:49.940 "rw_mbytes_per_sec": 0, 00:15:49.940 "r_mbytes_per_sec": 0, 00:15:49.940 "w_mbytes_per_sec": 0 00:15:49.940 }, 00:15:49.940 "claimed": true, 00:15:49.940 "claim_type": "exclusive_write", 00:15:49.940 "zoned": false, 00:15:49.940 "supported_io_types": { 00:15:49.940 "read": true, 00:15:49.940 "write": true, 00:15:49.940 "unmap": true, 00:15:49.940 "flush": true, 00:15:49.940 "reset": true, 00:15:49.940 "nvme_admin": false, 00:15:49.940 "nvme_io": false, 00:15:49.940 "nvme_io_md": false, 00:15:49.940 "write_zeroes": true, 00:15:49.940 "zcopy": true, 00:15:49.940 "get_zone_info": false, 00:15:49.940 "zone_management": false, 00:15:49.940 "zone_append": false, 00:15:49.940 
"compare": false, 00:15:49.940 "compare_and_write": false, 00:15:49.940 "abort": true, 00:15:49.940 "seek_hole": false, 00:15:49.940 "seek_data": false, 00:15:49.940 "copy": true, 00:15:49.940 "nvme_iov_md": false 00:15:49.940 }, 00:15:49.940 "memory_domains": [ 00:15:49.940 { 00:15:49.940 "dma_device_id": "system", 00:15:49.940 "dma_device_type": 1 00:15:49.940 }, 00:15:49.940 { 00:15:49.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.940 "dma_device_type": 2 00:15:49.940 } 00:15:49.940 ], 00:15:49.940 "driver_specific": {} 00:15:49.940 } 00:15:49.940 ] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.940 19:13:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.940 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.940 "name": "Existed_Raid", 00:15:49.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.940 "strip_size_kb": 64, 00:15:49.940 "state": "configuring", 00:15:49.940 "raid_level": "raid5f", 00:15:49.940 "superblock": false, 00:15:49.940 "num_base_bdevs": 3, 00:15:49.940 "num_base_bdevs_discovered": 2, 00:15:49.940 "num_base_bdevs_operational": 3, 00:15:49.940 "base_bdevs_list": [ 00:15:49.940 { 00:15:49.940 "name": "BaseBdev1", 00:15:49.940 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:49.941 "is_configured": true, 00:15:49.941 "data_offset": 0, 00:15:49.941 "data_size": 65536 00:15:49.941 }, 00:15:49.941 { 00:15:49.941 "name": null, 00:15:49.941 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:49.941 "is_configured": false, 00:15:49.941 "data_offset": 0, 00:15:49.941 "data_size": 65536 00:15:49.941 }, 00:15:49.941 { 00:15:49.941 "name": "BaseBdev3", 00:15:49.941 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:49.941 "is_configured": true, 00:15:49.941 "data_offset": 0, 00:15:49.941 "data_size": 65536 00:15:49.941 } 00:15:49.941 ] 00:15:49.941 }' 00:15:49.941 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.941 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.200 19:13:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:50.200 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.460 [2024-11-27 19:13:59.867015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.460 19:13:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.460 "name": "Existed_Raid", 00:15:50.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.460 "strip_size_kb": 64, 00:15:50.460 "state": "configuring", 00:15:50.460 "raid_level": "raid5f", 00:15:50.460 "superblock": false, 00:15:50.460 "num_base_bdevs": 3, 00:15:50.460 "num_base_bdevs_discovered": 1, 00:15:50.460 "num_base_bdevs_operational": 3, 00:15:50.460 "base_bdevs_list": [ 00:15:50.460 { 00:15:50.460 "name": "BaseBdev1", 00:15:50.460 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:50.460 "is_configured": true, 00:15:50.460 "data_offset": 0, 00:15:50.460 "data_size": 65536 00:15:50.460 }, 00:15:50.460 { 00:15:50.460 "name": null, 00:15:50.460 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:50.460 "is_configured": false, 00:15:50.460 "data_offset": 0, 00:15:50.460 "data_size": 65536 00:15:50.460 }, 00:15:50.460 { 00:15:50.460 "name": null, 
00:15:50.460 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:50.460 "is_configured": false, 00:15:50.460 "data_offset": 0, 00:15:50.460 "data_size": 65536 00:15:50.460 } 00:15:50.460 ] 00:15:50.460 }' 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.460 19:13:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.720 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.720 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.720 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.720 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.720 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.981 [2024-11-27 19:14:00.378169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.981 19:14:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.981 "name": "Existed_Raid", 00:15:50.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.981 "strip_size_kb": 64, 00:15:50.981 "state": "configuring", 00:15:50.981 "raid_level": "raid5f", 00:15:50.981 "superblock": false, 00:15:50.981 "num_base_bdevs": 3, 00:15:50.981 "num_base_bdevs_discovered": 2, 00:15:50.981 "num_base_bdevs_operational": 3, 00:15:50.981 "base_bdevs_list": [ 00:15:50.981 { 
00:15:50.981 "name": "BaseBdev1", 00:15:50.981 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:50.981 "is_configured": true, 00:15:50.981 "data_offset": 0, 00:15:50.981 "data_size": 65536 00:15:50.981 }, 00:15:50.981 { 00:15:50.981 "name": null, 00:15:50.981 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:50.981 "is_configured": false, 00:15:50.981 "data_offset": 0, 00:15:50.981 "data_size": 65536 00:15:50.981 }, 00:15:50.981 { 00:15:50.981 "name": "BaseBdev3", 00:15:50.981 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:50.981 "is_configured": true, 00:15:50.981 "data_offset": 0, 00:15:50.981 "data_size": 65536 00:15:50.981 } 00:15:50.981 ] 00:15:50.981 }' 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.981 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.241 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.241 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.241 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.241 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.501 [2024-11-27 19:14:00.889302] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.501 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.502 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.502 19:14:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.502 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.502 19:14:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.502 "name": "Existed_Raid", 00:15:51.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.502 "strip_size_kb": 64, 00:15:51.502 "state": "configuring", 00:15:51.502 "raid_level": "raid5f", 00:15:51.502 "superblock": false, 00:15:51.502 "num_base_bdevs": 3, 00:15:51.502 "num_base_bdevs_discovered": 1, 00:15:51.502 "num_base_bdevs_operational": 3, 00:15:51.502 "base_bdevs_list": [ 00:15:51.502 { 00:15:51.502 "name": null, 00:15:51.502 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:51.502 "is_configured": false, 00:15:51.502 "data_offset": 0, 00:15:51.502 "data_size": 65536 00:15:51.502 }, 00:15:51.502 { 00:15:51.502 "name": null, 00:15:51.502 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:51.502 "is_configured": false, 00:15:51.502 "data_offset": 0, 00:15:51.502 "data_size": 65536 00:15:51.502 }, 00:15:51.502 { 00:15:51.502 "name": "BaseBdev3", 00:15:51.502 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:51.502 "is_configured": true, 00:15:51.502 "data_offset": 0, 00:15:51.502 "data_size": 65536 00:15:51.502 } 00:15:51.502 ] 00:15:51.502 }' 00:15:51.502 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.502 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 [2024-11-27 19:14:01.489728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.072 19:14:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.072 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.072 "name": "Existed_Raid", 00:15:52.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.073 "strip_size_kb": 64, 00:15:52.073 "state": "configuring", 00:15:52.073 "raid_level": "raid5f", 00:15:52.073 "superblock": false, 00:15:52.073 "num_base_bdevs": 3, 00:15:52.073 "num_base_bdevs_discovered": 2, 00:15:52.073 "num_base_bdevs_operational": 3, 00:15:52.073 "base_bdevs_list": [ 00:15:52.073 { 00:15:52.073 "name": null, 00:15:52.073 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:52.073 "is_configured": false, 00:15:52.073 "data_offset": 0, 00:15:52.073 "data_size": 65536 00:15:52.073 }, 00:15:52.073 { 00:15:52.073 "name": "BaseBdev2", 00:15:52.073 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:52.073 "is_configured": true, 00:15:52.073 "data_offset": 0, 00:15:52.073 "data_size": 65536 00:15:52.073 }, 00:15:52.073 { 00:15:52.073 "name": "BaseBdev3", 00:15:52.073 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:52.073 "is_configured": true, 00:15:52.073 "data_offset": 0, 00:15:52.073 "data_size": 65536 00:15:52.073 } 00:15:52.073 ] 00:15:52.073 }' 00:15:52.073 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.073 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:52.332 
19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.332 19:14:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:52.593 19:14:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d0382913-3456-4aa8-a11c-81541a6e9251 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.593 [2024-11-27 19:14:02.050025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:52.593 [2024-11-27 19:14:02.050079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:52.593 [2024-11-27 19:14:02.050090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:52.593 [2024-11-27 19:14:02.050350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:52.593 [2024-11-27 19:14:02.055553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:52.593 [2024-11-27 19:14:02.055584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:52.593 [2024-11-27 19:14:02.055874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.593 NewBaseBdev 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.593 19:14:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.593 [ 00:15:52.593 { 00:15:52.593 "name": "NewBaseBdev", 00:15:52.593 "aliases": [ 00:15:52.593 "d0382913-3456-4aa8-a11c-81541a6e9251" 00:15:52.593 ], 00:15:52.593 "product_name": "Malloc disk", 00:15:52.593 "block_size": 512, 00:15:52.593 "num_blocks": 65536, 00:15:52.593 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:52.593 "assigned_rate_limits": { 00:15:52.593 "rw_ios_per_sec": 0, 00:15:52.593 "rw_mbytes_per_sec": 0, 00:15:52.593 "r_mbytes_per_sec": 0, 00:15:52.593 "w_mbytes_per_sec": 0 00:15:52.593 }, 00:15:52.593 "claimed": true, 00:15:52.593 "claim_type": "exclusive_write", 00:15:52.593 "zoned": false, 00:15:52.593 "supported_io_types": { 00:15:52.593 "read": true, 00:15:52.593 "write": true, 00:15:52.593 "unmap": true, 00:15:52.593 "flush": true, 00:15:52.593 "reset": true, 00:15:52.593 "nvme_admin": false, 00:15:52.593 "nvme_io": false, 00:15:52.593 "nvme_io_md": false, 00:15:52.593 "write_zeroes": true, 00:15:52.593 "zcopy": true, 00:15:52.593 "get_zone_info": false, 00:15:52.593 "zone_management": false, 00:15:52.593 "zone_append": false, 00:15:52.593 "compare": false, 00:15:52.593 "compare_and_write": false, 00:15:52.593 "abort": true, 00:15:52.593 "seek_hole": false, 00:15:52.593 "seek_data": false, 00:15:52.593 "copy": true, 00:15:52.593 "nvme_iov_md": false 00:15:52.593 }, 00:15:52.593 "memory_domains": [ 00:15:52.593 { 00:15:52.593 "dma_device_id": "system", 00:15:52.593 "dma_device_type": 1 00:15:52.593 }, 00:15:52.593 { 00:15:52.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.593 "dma_device_type": 2 00:15:52.593 } 00:15:52.593 ], 00:15:52.593 "driver_specific": {} 00:15:52.593 } 00:15:52.593 ] 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:52.593 19:14:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.593 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.594 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.594 "name": "Existed_Raid", 00:15:52.594 "uuid": "7ad4456f-65d8-4ea1-9cc1-f0d6b037ed89", 00:15:52.594 "strip_size_kb": 64, 00:15:52.594 "state": "online", 
00:15:52.594 "raid_level": "raid5f", 00:15:52.594 "superblock": false, 00:15:52.594 "num_base_bdevs": 3, 00:15:52.594 "num_base_bdevs_discovered": 3, 00:15:52.594 "num_base_bdevs_operational": 3, 00:15:52.594 "base_bdevs_list": [ 00:15:52.594 { 00:15:52.594 "name": "NewBaseBdev", 00:15:52.594 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:52.594 "is_configured": true, 00:15:52.594 "data_offset": 0, 00:15:52.594 "data_size": 65536 00:15:52.594 }, 00:15:52.594 { 00:15:52.594 "name": "BaseBdev2", 00:15:52.594 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:52.594 "is_configured": true, 00:15:52.594 "data_offset": 0, 00:15:52.594 "data_size": 65536 00:15:52.594 }, 00:15:52.594 { 00:15:52.594 "name": "BaseBdev3", 00:15:52.594 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:52.594 "is_configured": true, 00:15:52.594 "data_offset": 0, 00:15:52.594 "data_size": 65536 00:15:52.594 } 00:15:52.594 ] 00:15:52.594 }' 00:15:52.594 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.594 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.163 19:14:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.163 [2024-11-27 19:14:02.514355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.163 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.163 "name": "Existed_Raid", 00:15:53.163 "aliases": [ 00:15:53.163 "7ad4456f-65d8-4ea1-9cc1-f0d6b037ed89" 00:15:53.163 ], 00:15:53.164 "product_name": "Raid Volume", 00:15:53.164 "block_size": 512, 00:15:53.164 "num_blocks": 131072, 00:15:53.164 "uuid": "7ad4456f-65d8-4ea1-9cc1-f0d6b037ed89", 00:15:53.164 "assigned_rate_limits": { 00:15:53.164 "rw_ios_per_sec": 0, 00:15:53.164 "rw_mbytes_per_sec": 0, 00:15:53.164 "r_mbytes_per_sec": 0, 00:15:53.164 "w_mbytes_per_sec": 0 00:15:53.164 }, 00:15:53.164 "claimed": false, 00:15:53.164 "zoned": false, 00:15:53.164 "supported_io_types": { 00:15:53.164 "read": true, 00:15:53.164 "write": true, 00:15:53.164 "unmap": false, 00:15:53.164 "flush": false, 00:15:53.164 "reset": true, 00:15:53.164 "nvme_admin": false, 00:15:53.164 "nvme_io": false, 00:15:53.164 "nvme_io_md": false, 00:15:53.164 "write_zeroes": true, 00:15:53.164 "zcopy": false, 00:15:53.164 "get_zone_info": false, 00:15:53.164 "zone_management": false, 00:15:53.164 "zone_append": false, 00:15:53.164 "compare": false, 00:15:53.164 "compare_and_write": false, 00:15:53.164 "abort": false, 00:15:53.164 "seek_hole": false, 00:15:53.164 "seek_data": false, 00:15:53.164 "copy": false, 00:15:53.164 "nvme_iov_md": false 00:15:53.164 }, 00:15:53.164 "driver_specific": { 00:15:53.164 "raid": { 00:15:53.164 "uuid": 
"7ad4456f-65d8-4ea1-9cc1-f0d6b037ed89", 00:15:53.164 "strip_size_kb": 64, 00:15:53.164 "state": "online", 00:15:53.164 "raid_level": "raid5f", 00:15:53.164 "superblock": false, 00:15:53.164 "num_base_bdevs": 3, 00:15:53.164 "num_base_bdevs_discovered": 3, 00:15:53.164 "num_base_bdevs_operational": 3, 00:15:53.164 "base_bdevs_list": [ 00:15:53.164 { 00:15:53.164 "name": "NewBaseBdev", 00:15:53.164 "uuid": "d0382913-3456-4aa8-a11c-81541a6e9251", 00:15:53.164 "is_configured": true, 00:15:53.164 "data_offset": 0, 00:15:53.164 "data_size": 65536 00:15:53.164 }, 00:15:53.164 { 00:15:53.164 "name": "BaseBdev2", 00:15:53.164 "uuid": "546a10be-637f-4854-851c-4005134926bb", 00:15:53.164 "is_configured": true, 00:15:53.164 "data_offset": 0, 00:15:53.164 "data_size": 65536 00:15:53.164 }, 00:15:53.164 { 00:15:53.164 "name": "BaseBdev3", 00:15:53.164 "uuid": "19be05b7-f34d-405e-b26d-16f231209b6b", 00:15:53.164 "is_configured": true, 00:15:53.164 "data_offset": 0, 00:15:53.164 "data_size": 65536 00:15:53.164 } 00:15:53.164 ] 00:15:53.164 } 00:15:53.164 } 00:15:53.164 }' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:53.164 BaseBdev2 00:15:53.164 BaseBdev3' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.164 [2024-11-27 19:14:02.749759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.164 [2024-11-27 19:14:02.749786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.164 [2024-11-27 19:14:02.749865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.164 [2024-11-27 19:14:02.750190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.164 [2024-11-27 19:14:02.750209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79960 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79960 ']' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79960 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79960 00:15:53.164 killing process with pid 79960 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79960' 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79960 00:15:53.164 [2024-11-27 19:14:02.797253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.164 19:14:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79960 00:15:53.733 [2024-11-27 19:14:03.121603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:55.114 00:15:55.114 real 0m10.667s 00:15:55.114 user 0m16.610s 00:15:55.114 sys 0m2.108s 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.114 ************************************ 00:15:55.114 END TEST raid5f_state_function_test 00:15:55.114 ************************************ 00:15:55.114 19:14:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:55.114 19:14:04 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:55.114 19:14:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.114 19:14:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.114 ************************************ 00:15:55.114 START TEST raid5f_state_function_test_sb 00:15:55.114 ************************************ 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:55.114 19:14:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80587 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80587' 00:15:55.114 Process raid pid: 80587 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80587 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80587 ']' 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.114 19:14:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.114 [2024-11-27 19:14:04.526311] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:55.114 [2024-11-27 19:14:04.526434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.114 [2024-11-27 19:14:04.706874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.374 [2024-11-27 19:14:04.847857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.634 [2024-11-27 19:14:05.089468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.634 [2024-11-27 19:14:05.089507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.893 [2024-11-27 19:14:05.370370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.893 [2024-11-27 19:14:05.370436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.893 [2024-11-27 19:14:05.370452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.893 [2024-11-27 19:14:05.370463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.893 [2024-11-27 19:14:05.370469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:55.893 [2024-11-27 19:14:05.370479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.893 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.894 19:14:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.894 "name": "Existed_Raid", 00:15:55.894 "uuid": "688de97d-0eee-4e28-a52e-d176b75baa47", 00:15:55.894 "strip_size_kb": 64, 00:15:55.894 "state": "configuring", 00:15:55.894 "raid_level": "raid5f", 00:15:55.894 "superblock": true, 00:15:55.894 "num_base_bdevs": 3, 00:15:55.894 "num_base_bdevs_discovered": 0, 00:15:55.894 "num_base_bdevs_operational": 3, 00:15:55.894 "base_bdevs_list": [ 00:15:55.894 { 00:15:55.894 "name": "BaseBdev1", 00:15:55.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.894 "is_configured": false, 00:15:55.894 "data_offset": 0, 00:15:55.894 "data_size": 0 00:15:55.894 }, 00:15:55.894 { 00:15:55.894 "name": "BaseBdev2", 00:15:55.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.894 "is_configured": false, 00:15:55.894 "data_offset": 0, 00:15:55.894 "data_size": 0 00:15:55.894 }, 00:15:55.894 { 00:15:55.894 "name": "BaseBdev3", 00:15:55.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.894 "is_configured": false, 00:15:55.894 "data_offset": 0, 00:15:55.894 "data_size": 0 00:15:55.894 } 00:15:55.894 ] 00:15:55.894 }' 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.894 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.470 [2024-11-27 19:14:05.829531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.470 
[2024-11-27 19:14:05.829580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.470 [2024-11-27 19:14:05.841508] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.470 [2024-11-27 19:14:05.841558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.470 [2024-11-27 19:14:05.841567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.470 [2024-11-27 19:14:05.841577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.470 [2024-11-27 19:14:05.841583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:56.470 [2024-11-27 19:14:05.841593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.470 [2024-11-27 19:14:05.897123] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.470 BaseBdev1 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.470 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.471 [ 00:15:56.471 { 00:15:56.471 "name": "BaseBdev1", 00:15:56.471 "aliases": [ 00:15:56.471 "c9a552e2-7687-47c6-bb82-29855da1ac18" 00:15:56.471 ], 00:15:56.471 "product_name": "Malloc disk", 00:15:56.471 "block_size": 512, 00:15:56.471 
"num_blocks": 65536, 00:15:56.471 "uuid": "c9a552e2-7687-47c6-bb82-29855da1ac18", 00:15:56.471 "assigned_rate_limits": { 00:15:56.471 "rw_ios_per_sec": 0, 00:15:56.471 "rw_mbytes_per_sec": 0, 00:15:56.471 "r_mbytes_per_sec": 0, 00:15:56.471 "w_mbytes_per_sec": 0 00:15:56.471 }, 00:15:56.471 "claimed": true, 00:15:56.471 "claim_type": "exclusive_write", 00:15:56.471 "zoned": false, 00:15:56.471 "supported_io_types": { 00:15:56.471 "read": true, 00:15:56.471 "write": true, 00:15:56.471 "unmap": true, 00:15:56.471 "flush": true, 00:15:56.471 "reset": true, 00:15:56.471 "nvme_admin": false, 00:15:56.471 "nvme_io": false, 00:15:56.471 "nvme_io_md": false, 00:15:56.471 "write_zeroes": true, 00:15:56.471 "zcopy": true, 00:15:56.471 "get_zone_info": false, 00:15:56.471 "zone_management": false, 00:15:56.471 "zone_append": false, 00:15:56.471 "compare": false, 00:15:56.471 "compare_and_write": false, 00:15:56.471 "abort": true, 00:15:56.471 "seek_hole": false, 00:15:56.471 "seek_data": false, 00:15:56.471 "copy": true, 00:15:56.471 "nvme_iov_md": false 00:15:56.471 }, 00:15:56.471 "memory_domains": [ 00:15:56.471 { 00:15:56.471 "dma_device_id": "system", 00:15:56.471 "dma_device_type": 1 00:15:56.471 }, 00:15:56.471 { 00:15:56.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.471 "dma_device_type": 2 00:15:56.471 } 00:15:56.471 ], 00:15:56.471 "driver_specific": {} 00:15:56.471 } 00:15:56.471 ] 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.471 "name": "Existed_Raid", 00:15:56.471 "uuid": "0c80ba1c-7466-43f7-a204-74fe62eb7500", 00:15:56.471 "strip_size_kb": 64, 00:15:56.471 "state": "configuring", 00:15:56.471 "raid_level": "raid5f", 00:15:56.471 "superblock": true, 00:15:56.471 "num_base_bdevs": 3, 00:15:56.471 "num_base_bdevs_discovered": 1, 00:15:56.471 "num_base_bdevs_operational": 3, 00:15:56.471 "base_bdevs_list": [ 00:15:56.471 { 00:15:56.471 
"name": "BaseBdev1", 00:15:56.471 "uuid": "c9a552e2-7687-47c6-bb82-29855da1ac18", 00:15:56.471 "is_configured": true, 00:15:56.471 "data_offset": 2048, 00:15:56.471 "data_size": 63488 00:15:56.471 }, 00:15:56.471 { 00:15:56.471 "name": "BaseBdev2", 00:15:56.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.471 "is_configured": false, 00:15:56.471 "data_offset": 0, 00:15:56.471 "data_size": 0 00:15:56.471 }, 00:15:56.471 { 00:15:56.471 "name": "BaseBdev3", 00:15:56.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.471 "is_configured": false, 00:15:56.471 "data_offset": 0, 00:15:56.471 "data_size": 0 00:15:56.471 } 00:15:56.471 ] 00:15:56.471 }' 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.471 19:14:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.040 [2024-11-27 19:14:06.396352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.040 [2024-11-27 19:14:06.396421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:57.040 [2024-11-27 19:14:06.408395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.040 [2024-11-27 19:14:06.410583] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.040 [2024-11-27 19:14:06.410631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.040 [2024-11-27 19:14:06.410641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.040 [2024-11-27 19:14:06.410649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.040 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.041 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.041 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.041 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.041 "name": "Existed_Raid", 00:15:57.041 "uuid": "39c8a746-6088-47ac-a04b-258587f6463e", 00:15:57.041 "strip_size_kb": 64, 00:15:57.041 "state": "configuring", 00:15:57.041 "raid_level": "raid5f", 00:15:57.041 "superblock": true, 00:15:57.041 "num_base_bdevs": 3, 00:15:57.041 "num_base_bdevs_discovered": 1, 00:15:57.041 "num_base_bdevs_operational": 3, 00:15:57.041 "base_bdevs_list": [ 00:15:57.041 { 00:15:57.041 "name": "BaseBdev1", 00:15:57.041 "uuid": "c9a552e2-7687-47c6-bb82-29855da1ac18", 00:15:57.041 "is_configured": true, 00:15:57.041 "data_offset": 2048, 00:15:57.041 "data_size": 63488 00:15:57.041 }, 00:15:57.041 { 00:15:57.041 "name": "BaseBdev2", 00:15:57.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.041 "is_configured": false, 00:15:57.041 "data_offset": 0, 00:15:57.041 "data_size": 0 00:15:57.041 }, 00:15:57.041 { 00:15:57.041 "name": "BaseBdev3", 00:15:57.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.041 "is_configured": false, 00:15:57.041 "data_offset": 0, 00:15:57.041 "data_size": 
0 00:15:57.041 } 00:15:57.041 ] 00:15:57.041 }' 00:15:57.041 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.041 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 [2024-11-27 19:14:06.878511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.301 BaseBdev2 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 [ 00:15:57.301 { 00:15:57.301 "name": "BaseBdev2", 00:15:57.301 "aliases": [ 00:15:57.301 "b56a28ad-8af5-4749-8c79-496b5700efaa" 00:15:57.301 ], 00:15:57.301 "product_name": "Malloc disk", 00:15:57.301 "block_size": 512, 00:15:57.301 "num_blocks": 65536, 00:15:57.301 "uuid": "b56a28ad-8af5-4749-8c79-496b5700efaa", 00:15:57.301 "assigned_rate_limits": { 00:15:57.301 "rw_ios_per_sec": 0, 00:15:57.301 "rw_mbytes_per_sec": 0, 00:15:57.301 "r_mbytes_per_sec": 0, 00:15:57.301 "w_mbytes_per_sec": 0 00:15:57.301 }, 00:15:57.301 "claimed": true, 00:15:57.301 "claim_type": "exclusive_write", 00:15:57.301 "zoned": false, 00:15:57.301 "supported_io_types": { 00:15:57.301 "read": true, 00:15:57.301 "write": true, 00:15:57.301 "unmap": true, 00:15:57.301 "flush": true, 00:15:57.301 "reset": true, 00:15:57.301 "nvme_admin": false, 00:15:57.301 "nvme_io": false, 00:15:57.301 "nvme_io_md": false, 00:15:57.301 "write_zeroes": true, 00:15:57.301 "zcopy": true, 00:15:57.301 "get_zone_info": false, 00:15:57.301 "zone_management": false, 00:15:57.301 "zone_append": false, 00:15:57.301 "compare": false, 00:15:57.301 "compare_and_write": false, 00:15:57.301 "abort": true, 00:15:57.301 "seek_hole": false, 00:15:57.301 "seek_data": false, 00:15:57.301 "copy": true, 00:15:57.301 "nvme_iov_md": false 00:15:57.301 }, 00:15:57.301 "memory_domains": [ 00:15:57.301 { 00:15:57.301 "dma_device_id": "system", 00:15:57.301 "dma_device_type": 1 00:15:57.301 }, 00:15:57.301 { 00:15:57.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.301 "dma_device_type": 2 00:15:57.301 } 
00:15:57.301 ], 00:15:57.301 "driver_specific": {} 00:15:57.301 } 00:15:57.301 ] 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.301 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.561 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.561 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.561 "name": "Existed_Raid", 00:15:57.561 "uuid": "39c8a746-6088-47ac-a04b-258587f6463e", 00:15:57.561 "strip_size_kb": 64, 00:15:57.561 "state": "configuring", 00:15:57.561 "raid_level": "raid5f", 00:15:57.561 "superblock": true, 00:15:57.561 "num_base_bdevs": 3, 00:15:57.561 "num_base_bdevs_discovered": 2, 00:15:57.561 "num_base_bdevs_operational": 3, 00:15:57.561 "base_bdevs_list": [ 00:15:57.561 { 00:15:57.561 "name": "BaseBdev1", 00:15:57.561 "uuid": "c9a552e2-7687-47c6-bb82-29855da1ac18", 00:15:57.561 "is_configured": true, 00:15:57.561 "data_offset": 2048, 00:15:57.561 "data_size": 63488 00:15:57.561 }, 00:15:57.561 { 00:15:57.561 "name": "BaseBdev2", 00:15:57.561 "uuid": "b56a28ad-8af5-4749-8c79-496b5700efaa", 00:15:57.561 "is_configured": true, 00:15:57.561 "data_offset": 2048, 00:15:57.561 "data_size": 63488 00:15:57.561 }, 00:15:57.561 { 00:15:57.561 "name": "BaseBdev3", 00:15:57.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.561 "is_configured": false, 00:15:57.561 "data_offset": 0, 00:15:57.561 "data_size": 0 00:15:57.561 } 00:15:57.561 ] 00:15:57.561 }' 00:15:57.561 19:14:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.561 19:14:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 [2024-11-27 19:14:07.374481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.821 [2024-11-27 19:14:07.374877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:57.821 [2024-11-27 19:14:07.374944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:57.821 [2024-11-27 19:14:07.375264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:57.821 BaseBdev3 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 [2024-11-27 19:14:07.380963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:57.821 [2024-11-27 19:14:07.381038] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:57.821 [2024-11-27 19:14:07.381283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 [ 00:15:57.821 { 00:15:57.821 "name": "BaseBdev3", 00:15:57.821 "aliases": [ 00:15:57.821 "a00a2c32-534c-4ba8-9805-c1907a8339c5" 00:15:57.821 ], 00:15:57.821 "product_name": "Malloc disk", 00:15:57.821 "block_size": 512, 00:15:57.821 "num_blocks": 65536, 00:15:57.821 "uuid": "a00a2c32-534c-4ba8-9805-c1907a8339c5", 00:15:57.821 "assigned_rate_limits": { 00:15:57.821 "rw_ios_per_sec": 0, 00:15:57.821 "rw_mbytes_per_sec": 0, 00:15:57.821 "r_mbytes_per_sec": 0, 00:15:57.821 "w_mbytes_per_sec": 0 00:15:57.821 }, 00:15:57.821 "claimed": true, 00:15:57.821 "claim_type": "exclusive_write", 00:15:57.821 "zoned": false, 00:15:57.821 "supported_io_types": { 00:15:57.821 "read": true, 00:15:57.821 "write": true, 00:15:57.821 "unmap": true, 00:15:57.821 "flush": true, 00:15:57.821 "reset": true, 00:15:57.821 "nvme_admin": false, 00:15:57.821 "nvme_io": false, 00:15:57.821 "nvme_io_md": false, 00:15:57.821 "write_zeroes": true, 00:15:57.821 "zcopy": true, 00:15:57.821 "get_zone_info": false, 00:15:57.821 "zone_management": false, 00:15:57.821 "zone_append": false, 00:15:57.821 "compare": false, 00:15:57.821 "compare_and_write": false, 00:15:57.821 "abort": true, 00:15:57.821 "seek_hole": false, 00:15:57.821 "seek_data": false, 00:15:57.821 "copy": true, 00:15:57.821 
"nvme_iov_md": false 00:15:57.821 }, 00:15:57.821 "memory_domains": [ 00:15:57.821 { 00:15:57.821 "dma_device_id": "system", 00:15:57.821 "dma_device_type": 1 00:15:57.821 }, 00:15:57.821 { 00:15:57.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.821 "dma_device_type": 2 00:15:57.821 } 00:15:57.821 ], 00:15:57.821 "driver_specific": {} 00:15:57.821 } 00:15:57.821 ] 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.821 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.081 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.081 "name": "Existed_Raid", 00:15:58.081 "uuid": "39c8a746-6088-47ac-a04b-258587f6463e", 00:15:58.081 "strip_size_kb": 64, 00:15:58.081 "state": "online", 00:15:58.081 "raid_level": "raid5f", 00:15:58.081 "superblock": true, 00:15:58.081 "num_base_bdevs": 3, 00:15:58.081 "num_base_bdevs_discovered": 3, 00:15:58.081 "num_base_bdevs_operational": 3, 00:15:58.081 "base_bdevs_list": [ 00:15:58.081 { 00:15:58.081 "name": "BaseBdev1", 00:15:58.081 "uuid": "c9a552e2-7687-47c6-bb82-29855da1ac18", 00:15:58.081 "is_configured": true, 00:15:58.081 "data_offset": 2048, 00:15:58.081 "data_size": 63488 00:15:58.081 }, 00:15:58.081 { 00:15:58.081 "name": "BaseBdev2", 00:15:58.081 "uuid": "b56a28ad-8af5-4749-8c79-496b5700efaa", 00:15:58.081 "is_configured": true, 00:15:58.081 "data_offset": 2048, 00:15:58.081 "data_size": 63488 00:15:58.081 }, 00:15:58.081 { 00:15:58.081 "name": "BaseBdev3", 00:15:58.081 "uuid": "a00a2c32-534c-4ba8-9805-c1907a8339c5", 00:15:58.081 "is_configured": true, 00:15:58.081 "data_offset": 2048, 00:15:58.081 "data_size": 63488 00:15:58.081 } 00:15:58.081 ] 00:15:58.081 }' 00:15:58.081 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.081 19:14:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.341 [2024-11-27 19:14:07.864192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.341 "name": "Existed_Raid", 00:15:58.341 "aliases": [ 00:15:58.341 "39c8a746-6088-47ac-a04b-258587f6463e" 00:15:58.341 ], 00:15:58.341 "product_name": "Raid Volume", 00:15:58.341 "block_size": 512, 00:15:58.341 "num_blocks": 126976, 00:15:58.341 "uuid": "39c8a746-6088-47ac-a04b-258587f6463e", 00:15:58.341 "assigned_rate_limits": { 00:15:58.341 "rw_ios_per_sec": 0, 00:15:58.341 
"rw_mbytes_per_sec": 0, 00:15:58.341 "r_mbytes_per_sec": 0, 00:15:58.341 "w_mbytes_per_sec": 0 00:15:58.341 }, 00:15:58.341 "claimed": false, 00:15:58.341 "zoned": false, 00:15:58.341 "supported_io_types": { 00:15:58.341 "read": true, 00:15:58.341 "write": true, 00:15:58.341 "unmap": false, 00:15:58.341 "flush": false, 00:15:58.341 "reset": true, 00:15:58.341 "nvme_admin": false, 00:15:58.341 "nvme_io": false, 00:15:58.341 "nvme_io_md": false, 00:15:58.341 "write_zeroes": true, 00:15:58.341 "zcopy": false, 00:15:58.341 "get_zone_info": false, 00:15:58.341 "zone_management": false, 00:15:58.341 "zone_append": false, 00:15:58.341 "compare": false, 00:15:58.341 "compare_and_write": false, 00:15:58.341 "abort": false, 00:15:58.341 "seek_hole": false, 00:15:58.341 "seek_data": false, 00:15:58.341 "copy": false, 00:15:58.341 "nvme_iov_md": false 00:15:58.341 }, 00:15:58.341 "driver_specific": { 00:15:58.341 "raid": { 00:15:58.341 "uuid": "39c8a746-6088-47ac-a04b-258587f6463e", 00:15:58.341 "strip_size_kb": 64, 00:15:58.341 "state": "online", 00:15:58.341 "raid_level": "raid5f", 00:15:58.341 "superblock": true, 00:15:58.341 "num_base_bdevs": 3, 00:15:58.341 "num_base_bdevs_discovered": 3, 00:15:58.341 "num_base_bdevs_operational": 3, 00:15:58.341 "base_bdevs_list": [ 00:15:58.341 { 00:15:58.341 "name": "BaseBdev1", 00:15:58.341 "uuid": "c9a552e2-7687-47c6-bb82-29855da1ac18", 00:15:58.341 "is_configured": true, 00:15:58.341 "data_offset": 2048, 00:15:58.341 "data_size": 63488 00:15:58.341 }, 00:15:58.341 { 00:15:58.341 "name": "BaseBdev2", 00:15:58.341 "uuid": "b56a28ad-8af5-4749-8c79-496b5700efaa", 00:15:58.341 "is_configured": true, 00:15:58.341 "data_offset": 2048, 00:15:58.341 "data_size": 63488 00:15:58.341 }, 00:15:58.341 { 00:15:58.341 "name": "BaseBdev3", 00:15:58.341 "uuid": "a00a2c32-534c-4ba8-9805-c1907a8339c5", 00:15:58.341 "is_configured": true, 00:15:58.341 "data_offset": 2048, 00:15:58.341 "data_size": 63488 00:15:58.341 } 00:15:58.341 ] 00:15:58.341 } 
00:15:58.341 } 00:15:58.341 }' 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:58.341 BaseBdev2 00:15:58.341 BaseBdev3' 00:15:58.341 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.602 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.602 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.602 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.602 19:14:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:58.602 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.602 19:14:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.602 [2024-11-27 
19:14:08.123672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.602 19:14:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.602 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.862 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.862 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.862 "name": "Existed_Raid", 00:15:58.862 "uuid": "39c8a746-6088-47ac-a04b-258587f6463e", 00:15:58.862 "strip_size_kb": 64, 00:15:58.862 "state": "online", 00:15:58.862 "raid_level": "raid5f", 00:15:58.862 "superblock": true, 00:15:58.862 "num_base_bdevs": 3, 00:15:58.862 "num_base_bdevs_discovered": 2, 00:15:58.862 "num_base_bdevs_operational": 2, 00:15:58.862 "base_bdevs_list": [ 00:15:58.862 { 00:15:58.862 "name": null, 00:15:58.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.862 "is_configured": false, 00:15:58.862 "data_offset": 0, 00:15:58.862 "data_size": 63488 00:15:58.862 }, 00:15:58.862 { 00:15:58.862 "name": "BaseBdev2", 00:15:58.862 "uuid": "b56a28ad-8af5-4749-8c79-496b5700efaa", 00:15:58.862 "is_configured": true, 00:15:58.862 "data_offset": 2048, 00:15:58.862 "data_size": 63488 00:15:58.862 }, 00:15:58.862 { 00:15:58.862 "name": "BaseBdev3", 00:15:58.862 "uuid": "a00a2c32-534c-4ba8-9805-c1907a8339c5", 00:15:58.862 "is_configured": true, 00:15:58.862 "data_offset": 2048, 00:15:58.862 "data_size": 63488 00:15:58.862 } 00:15:58.862 ] 00:15:58.862 }' 00:15:58.862 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.862 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.122 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.122 [2024-11-27 19:14:08.719214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.122 [2024-11-27 19:14:08.719471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.382 [2024-11-27 19:14:08.820049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.382 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.383 19:14:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 [2024-11-27 19:14:08.860025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:59.383 [2024-11-27 19:14:08.860146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.383 
19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.383 19:14:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.645 BaseBdev2 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.645 19:14:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.645 [ 00:15:59.645 { 00:15:59.645 "name": "BaseBdev2", 00:15:59.645 "aliases": [ 00:15:59.645 "fa5794bf-0cff-4d58-aa47-8e4da099e39f" 00:15:59.645 ], 00:15:59.645 "product_name": "Malloc disk", 00:15:59.645 "block_size": 512, 00:15:59.645 "num_blocks": 65536, 00:15:59.645 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:15:59.645 "assigned_rate_limits": { 00:15:59.645 "rw_ios_per_sec": 0, 00:15:59.645 "rw_mbytes_per_sec": 0, 00:15:59.645 "r_mbytes_per_sec": 0, 00:15:59.645 "w_mbytes_per_sec": 0 00:15:59.645 }, 00:15:59.645 "claimed": false, 00:15:59.645 "zoned": false, 00:15:59.645 "supported_io_types": { 00:15:59.645 "read": true, 00:15:59.645 "write": true, 00:15:59.645 "unmap": true, 00:15:59.645 "flush": true, 00:15:59.645 "reset": true, 00:15:59.645 "nvme_admin": false, 00:15:59.645 "nvme_io": false, 00:15:59.645 "nvme_io_md": false, 00:15:59.645 "write_zeroes": true, 00:15:59.645 "zcopy": true, 00:15:59.645 "get_zone_info": false, 
00:15:59.645 "zone_management": false, 00:15:59.645 "zone_append": false, 00:15:59.645 "compare": false, 00:15:59.645 "compare_and_write": false, 00:15:59.645 "abort": true, 00:15:59.645 "seek_hole": false, 00:15:59.645 "seek_data": false, 00:15:59.645 "copy": true, 00:15:59.645 "nvme_iov_md": false 00:15:59.645 }, 00:15:59.645 "memory_domains": [ 00:15:59.645 { 00:15:59.645 "dma_device_id": "system", 00:15:59.645 "dma_device_type": 1 00:15:59.645 }, 00:15:59.645 { 00:15:59.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.645 "dma_device_type": 2 00:15:59.645 } 00:15:59.645 ], 00:15:59.645 "driver_specific": {} 00:15:59.645 } 00:15:59.645 ] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.645 BaseBdev3 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:59.645 19:14:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.645 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.645 [ 00:15:59.645 { 00:15:59.645 "name": "BaseBdev3", 00:15:59.645 "aliases": [ 00:15:59.645 "b574948a-386f-420c-8faf-db300d5b4fb5" 00:15:59.645 ], 00:15:59.645 "product_name": "Malloc disk", 00:15:59.645 "block_size": 512, 00:15:59.645 "num_blocks": 65536, 00:15:59.645 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:15:59.645 "assigned_rate_limits": { 00:15:59.645 "rw_ios_per_sec": 0, 00:15:59.645 "rw_mbytes_per_sec": 0, 00:15:59.645 "r_mbytes_per_sec": 0, 00:15:59.645 "w_mbytes_per_sec": 0 00:15:59.645 }, 00:15:59.645 "claimed": false, 00:15:59.646 "zoned": false, 00:15:59.646 "supported_io_types": { 00:15:59.646 "read": true, 00:15:59.646 "write": true, 00:15:59.646 "unmap": true, 00:15:59.646 "flush": true, 00:15:59.646 "reset": true, 00:15:59.646 "nvme_admin": false, 00:15:59.646 "nvme_io": false, 00:15:59.646 "nvme_io_md": 
false, 00:15:59.646 "write_zeroes": true, 00:15:59.646 "zcopy": true, 00:15:59.646 "get_zone_info": false, 00:15:59.646 "zone_management": false, 00:15:59.646 "zone_append": false, 00:15:59.646 "compare": false, 00:15:59.646 "compare_and_write": false, 00:15:59.646 "abort": true, 00:15:59.646 "seek_hole": false, 00:15:59.646 "seek_data": false, 00:15:59.646 "copy": true, 00:15:59.646 "nvme_iov_md": false 00:15:59.646 }, 00:15:59.646 "memory_domains": [ 00:15:59.646 { 00:15:59.646 "dma_device_id": "system", 00:15:59.646 "dma_device_type": 1 00:15:59.646 }, 00:15:59.646 { 00:15:59.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.646 "dma_device_type": 2 00:15:59.646 } 00:15:59.646 ], 00:15:59.646 "driver_specific": {} 00:15:59.646 } 00:15:59.646 ] 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.646 [2024-11-27 19:14:09.165291] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.646 [2024-11-27 19:14:09.165417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.646 [2024-11-27 19:14:09.165459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:59.646 [2024-11-27 19:14:09.167519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.646 19:14:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.646 "name": "Existed_Raid", 00:15:59.646 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:15:59.646 "strip_size_kb": 64, 00:15:59.646 "state": "configuring", 00:15:59.646 "raid_level": "raid5f", 00:15:59.646 "superblock": true, 00:15:59.646 "num_base_bdevs": 3, 00:15:59.646 "num_base_bdevs_discovered": 2, 00:15:59.646 "num_base_bdevs_operational": 3, 00:15:59.646 "base_bdevs_list": [ 00:15:59.646 { 00:15:59.646 "name": "BaseBdev1", 00:15:59.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.646 "is_configured": false, 00:15:59.646 "data_offset": 0, 00:15:59.646 "data_size": 0 00:15:59.646 }, 00:15:59.646 { 00:15:59.646 "name": "BaseBdev2", 00:15:59.646 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:15:59.646 "is_configured": true, 00:15:59.646 "data_offset": 2048, 00:15:59.646 "data_size": 63488 00:15:59.646 }, 00:15:59.646 { 00:15:59.646 "name": "BaseBdev3", 00:15:59.646 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:15:59.646 "is_configured": true, 00:15:59.646 "data_offset": 2048, 00:15:59.646 "data_size": 63488 00:15:59.646 } 00:15:59.646 ] 00:15:59.646 }' 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.646 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.217 [2024-11-27 19:14:09.648478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.217 
19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.217 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:00.218 "name": "Existed_Raid", 00:16:00.218 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:00.218 "strip_size_kb": 64, 00:16:00.218 "state": "configuring", 00:16:00.218 "raid_level": "raid5f", 00:16:00.218 "superblock": true, 00:16:00.218 "num_base_bdevs": 3, 00:16:00.218 "num_base_bdevs_discovered": 1, 00:16:00.218 "num_base_bdevs_operational": 3, 00:16:00.218 "base_bdevs_list": [ 00:16:00.218 { 00:16:00.218 "name": "BaseBdev1", 00:16:00.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.218 "is_configured": false, 00:16:00.218 "data_offset": 0, 00:16:00.218 "data_size": 0 00:16:00.218 }, 00:16:00.218 { 00:16:00.218 "name": null, 00:16:00.218 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:00.218 "is_configured": false, 00:16:00.218 "data_offset": 0, 00:16:00.218 "data_size": 63488 00:16:00.218 }, 00:16:00.218 { 00:16:00.218 "name": "BaseBdev3", 00:16:00.218 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:00.218 "is_configured": true, 00:16:00.218 "data_offset": 2048, 00:16:00.218 "data_size": 63488 00:16:00.218 } 00:16:00.218 ] 00:16:00.218 }' 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.218 19:14:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 [2024-11-27 19:14:10.197787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.787 BaseBdev1 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.787 
19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 [ 00:16:00.787 { 00:16:00.787 "name": "BaseBdev1", 00:16:00.787 "aliases": [ 00:16:00.787 "4a6482dc-db48-4e0b-9ef0-ec486804b58c" 00:16:00.787 ], 00:16:00.787 "product_name": "Malloc disk", 00:16:00.787 "block_size": 512, 00:16:00.787 "num_blocks": 65536, 00:16:00.787 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:00.787 "assigned_rate_limits": { 00:16:00.787 "rw_ios_per_sec": 0, 00:16:00.787 "rw_mbytes_per_sec": 0, 00:16:00.787 "r_mbytes_per_sec": 0, 00:16:00.787 "w_mbytes_per_sec": 0 00:16:00.787 }, 00:16:00.787 "claimed": true, 00:16:00.787 "claim_type": "exclusive_write", 00:16:00.787 "zoned": false, 00:16:00.787 "supported_io_types": { 00:16:00.787 "read": true, 00:16:00.787 "write": true, 00:16:00.787 "unmap": true, 00:16:00.787 "flush": true, 00:16:00.787 "reset": true, 00:16:00.787 "nvme_admin": false, 00:16:00.787 "nvme_io": false, 00:16:00.787 "nvme_io_md": false, 00:16:00.787 "write_zeroes": true, 00:16:00.787 "zcopy": true, 00:16:00.787 "get_zone_info": false, 00:16:00.787 "zone_management": false, 00:16:00.787 "zone_append": false, 00:16:00.787 "compare": false, 00:16:00.787 "compare_and_write": false, 00:16:00.787 "abort": true, 00:16:00.787 "seek_hole": false, 00:16:00.787 "seek_data": false, 00:16:00.787 "copy": true, 00:16:00.787 "nvme_iov_md": false 00:16:00.787 }, 00:16:00.787 "memory_domains": [ 00:16:00.787 { 00:16:00.787 "dma_device_id": "system", 00:16:00.787 "dma_device_type": 1 00:16:00.787 }, 00:16:00.787 { 00:16:00.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.787 "dma_device_type": 2 00:16:00.787 } 00:16:00.787 ], 00:16:00.787 "driver_specific": {} 00:16:00.787 } 00:16:00.787 ] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.787 
19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.787 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:00.787 "name": "Existed_Raid", 00:16:00.787 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:00.787 "strip_size_kb": 64, 00:16:00.787 "state": "configuring", 00:16:00.787 "raid_level": "raid5f", 00:16:00.787 "superblock": true, 00:16:00.788 "num_base_bdevs": 3, 00:16:00.788 "num_base_bdevs_discovered": 2, 00:16:00.788 "num_base_bdevs_operational": 3, 00:16:00.788 "base_bdevs_list": [ 00:16:00.788 { 00:16:00.788 "name": "BaseBdev1", 00:16:00.788 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:00.788 "is_configured": true, 00:16:00.788 "data_offset": 2048, 00:16:00.788 "data_size": 63488 00:16:00.788 }, 00:16:00.788 { 00:16:00.788 "name": null, 00:16:00.788 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:00.788 "is_configured": false, 00:16:00.788 "data_offset": 0, 00:16:00.788 "data_size": 63488 00:16:00.788 }, 00:16:00.788 { 00:16:00.788 "name": "BaseBdev3", 00:16:00.788 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:00.788 "is_configured": true, 00:16:00.788 "data_offset": 2048, 00:16:00.788 "data_size": 63488 00:16:00.788 } 00:16:00.788 ] 00:16:00.788 }' 00:16:00.788 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.788 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.358 [2024-11-27 19:14:10.740871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.358 19:14:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.358 "name": "Existed_Raid", 00:16:01.358 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:01.358 "strip_size_kb": 64, 00:16:01.358 "state": "configuring", 00:16:01.358 "raid_level": "raid5f", 00:16:01.358 "superblock": true, 00:16:01.358 "num_base_bdevs": 3, 00:16:01.358 "num_base_bdevs_discovered": 1, 00:16:01.358 "num_base_bdevs_operational": 3, 00:16:01.358 "base_bdevs_list": [ 00:16:01.358 { 00:16:01.358 "name": "BaseBdev1", 00:16:01.358 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:01.358 "is_configured": true, 00:16:01.358 "data_offset": 2048, 00:16:01.358 "data_size": 63488 00:16:01.358 }, 00:16:01.358 { 00:16:01.358 "name": null, 00:16:01.358 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:01.358 "is_configured": false, 00:16:01.358 "data_offset": 0, 00:16:01.358 "data_size": 63488 00:16:01.358 }, 00:16:01.358 { 00:16:01.358 "name": null, 00:16:01.358 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:01.358 "is_configured": false, 00:16:01.358 "data_offset": 0, 00:16:01.358 "data_size": 63488 00:16:01.358 } 00:16:01.358 ] 00:16:01.358 }' 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.358 19:14:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.618 [2024-11-27 19:14:11.200127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.618 19:14:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.618 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.618 "name": "Existed_Raid", 00:16:01.618 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:01.618 "strip_size_kb": 64, 00:16:01.618 "state": "configuring", 00:16:01.618 "raid_level": "raid5f", 00:16:01.619 "superblock": true, 00:16:01.619 "num_base_bdevs": 3, 00:16:01.619 "num_base_bdevs_discovered": 2, 00:16:01.619 "num_base_bdevs_operational": 3, 00:16:01.619 "base_bdevs_list": [ 00:16:01.619 { 00:16:01.619 "name": "BaseBdev1", 00:16:01.619 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 00:16:01.619 "name": null, 00:16:01.619 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:01.619 "is_configured": false, 00:16:01.619 "data_offset": 0, 00:16:01.619 "data_size": 63488 00:16:01.619 }, 00:16:01.619 { 
00:16:01.619 "name": "BaseBdev3", 00:16:01.619 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:01.619 "is_configured": true, 00:16:01.619 "data_offset": 2048, 00:16:01.619 "data_size": 63488 00:16:01.619 } 00:16:01.619 ] 00:16:01.619 }' 00:16:01.619 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.619 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.189 [2024-11-27 19:14:11.695740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.189 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.449 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.449 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.449 "name": "Existed_Raid", 00:16:02.449 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:02.449 "strip_size_kb": 64, 00:16:02.449 "state": "configuring", 00:16:02.449 "raid_level": "raid5f", 00:16:02.449 "superblock": true, 00:16:02.449 "num_base_bdevs": 3, 00:16:02.449 "num_base_bdevs_discovered": 1, 00:16:02.449 
"num_base_bdevs_operational": 3, 00:16:02.449 "base_bdevs_list": [ 00:16:02.449 { 00:16:02.449 "name": null, 00:16:02.449 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:02.449 "is_configured": false, 00:16:02.449 "data_offset": 0, 00:16:02.449 "data_size": 63488 00:16:02.449 }, 00:16:02.449 { 00:16:02.449 "name": null, 00:16:02.449 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:02.449 "is_configured": false, 00:16:02.449 "data_offset": 0, 00:16:02.449 "data_size": 63488 00:16:02.449 }, 00:16:02.449 { 00:16:02.449 "name": "BaseBdev3", 00:16:02.449 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:02.449 "is_configured": true, 00:16:02.449 "data_offset": 2048, 00:16:02.449 "data_size": 63488 00:16:02.449 } 00:16:02.449 ] 00:16:02.449 }' 00:16:02.449 19:14:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.449 19:14:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 19:14:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 [2024-11-27 19:14:12.245050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.709 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.709 "name": "Existed_Raid", 00:16:02.709 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:02.709 "strip_size_kb": 64, 00:16:02.709 "state": "configuring", 00:16:02.709 "raid_level": "raid5f", 00:16:02.709 "superblock": true, 00:16:02.709 "num_base_bdevs": 3, 00:16:02.709 "num_base_bdevs_discovered": 2, 00:16:02.709 "num_base_bdevs_operational": 3, 00:16:02.709 "base_bdevs_list": [ 00:16:02.709 { 00:16:02.709 "name": null, 00:16:02.709 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:02.709 "is_configured": false, 00:16:02.709 "data_offset": 0, 00:16:02.710 "data_size": 63488 00:16:02.710 }, 00:16:02.710 { 00:16:02.710 "name": "BaseBdev2", 00:16:02.710 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:02.710 "is_configured": true, 00:16:02.710 "data_offset": 2048, 00:16:02.710 "data_size": 63488 00:16:02.710 }, 00:16:02.710 { 00:16:02.710 "name": "BaseBdev3", 00:16:02.710 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:02.710 "is_configured": true, 00:16:02.710 "data_offset": 2048, 00:16:02.710 "data_size": 63488 00:16:02.710 } 00:16:02.710 ] 00:16:02.710 }' 00:16:02.710 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.710 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.279 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.279 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.279 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.279 19:14:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.279 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a6482dc-db48-4e0b-9ef0-ec486804b58c 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.280 [2024-11-27 19:14:12.845719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:03.280 [2024-11-27 19:14:12.845972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:03.280 [2024-11-27 19:14:12.845992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.280 [2024-11-27 19:14:12.846268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:03.280 NewBaseBdev 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.280 [2024-11-27 19:14:12.851671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:03.280 [2024-11-27 19:14:12.851702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:03.280 [2024-11-27 19:14:12.851890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.280 [ 00:16:03.280 { 00:16:03.280 "name": "NewBaseBdev", 00:16:03.280 "aliases": [ 00:16:03.280 "4a6482dc-db48-4e0b-9ef0-ec486804b58c" 00:16:03.280 
], 00:16:03.280 "product_name": "Malloc disk", 00:16:03.280 "block_size": 512, 00:16:03.280 "num_blocks": 65536, 00:16:03.280 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:03.280 "assigned_rate_limits": { 00:16:03.280 "rw_ios_per_sec": 0, 00:16:03.280 "rw_mbytes_per_sec": 0, 00:16:03.280 "r_mbytes_per_sec": 0, 00:16:03.280 "w_mbytes_per_sec": 0 00:16:03.280 }, 00:16:03.280 "claimed": true, 00:16:03.280 "claim_type": "exclusive_write", 00:16:03.280 "zoned": false, 00:16:03.280 "supported_io_types": { 00:16:03.280 "read": true, 00:16:03.280 "write": true, 00:16:03.280 "unmap": true, 00:16:03.280 "flush": true, 00:16:03.280 "reset": true, 00:16:03.280 "nvme_admin": false, 00:16:03.280 "nvme_io": false, 00:16:03.280 "nvme_io_md": false, 00:16:03.280 "write_zeroes": true, 00:16:03.280 "zcopy": true, 00:16:03.280 "get_zone_info": false, 00:16:03.280 "zone_management": false, 00:16:03.280 "zone_append": false, 00:16:03.280 "compare": false, 00:16:03.280 "compare_and_write": false, 00:16:03.280 "abort": true, 00:16:03.280 "seek_hole": false, 00:16:03.280 "seek_data": false, 00:16:03.280 "copy": true, 00:16:03.280 "nvme_iov_md": false 00:16:03.280 }, 00:16:03.280 "memory_domains": [ 00:16:03.280 { 00:16:03.280 "dma_device_id": "system", 00:16:03.280 "dma_device_type": 1 00:16:03.280 }, 00:16:03.280 { 00:16:03.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.280 "dma_device_type": 2 00:16:03.280 } 00:16:03.280 ], 00:16:03.280 "driver_specific": {} 00:16:03.280 } 00:16:03.280 ] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.280 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.541 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.541 "name": "Existed_Raid", 00:16:03.541 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:03.541 "strip_size_kb": 64, 00:16:03.541 "state": "online", 00:16:03.541 "raid_level": "raid5f", 00:16:03.541 "superblock": true, 00:16:03.541 "num_base_bdevs": 3, 00:16:03.541 "num_base_bdevs_discovered": 3, 00:16:03.541 
"num_base_bdevs_operational": 3, 00:16:03.541 "base_bdevs_list": [ 00:16:03.541 { 00:16:03.541 "name": "NewBaseBdev", 00:16:03.541 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:03.541 "is_configured": true, 00:16:03.541 "data_offset": 2048, 00:16:03.541 "data_size": 63488 00:16:03.541 }, 00:16:03.541 { 00:16:03.541 "name": "BaseBdev2", 00:16:03.541 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:03.541 "is_configured": true, 00:16:03.541 "data_offset": 2048, 00:16:03.541 "data_size": 63488 00:16:03.541 }, 00:16:03.541 { 00:16:03.541 "name": "BaseBdev3", 00:16:03.541 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:03.541 "is_configured": true, 00:16:03.541 "data_offset": 2048, 00:16:03.541 "data_size": 63488 00:16:03.541 } 00:16:03.541 ] 00:16:03.541 }' 00:16:03.541 19:14:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.541 19:14:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.801 19:14:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.801 [2024-11-27 19:14:13.354108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.801 "name": "Existed_Raid", 00:16:03.801 "aliases": [ 00:16:03.801 "6f616863-8b16-40d8-8722-3d2561111b1f" 00:16:03.801 ], 00:16:03.801 "product_name": "Raid Volume", 00:16:03.801 "block_size": 512, 00:16:03.801 "num_blocks": 126976, 00:16:03.801 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:03.801 "assigned_rate_limits": { 00:16:03.801 "rw_ios_per_sec": 0, 00:16:03.801 "rw_mbytes_per_sec": 0, 00:16:03.801 "r_mbytes_per_sec": 0, 00:16:03.801 "w_mbytes_per_sec": 0 00:16:03.801 }, 00:16:03.801 "claimed": false, 00:16:03.801 "zoned": false, 00:16:03.801 "supported_io_types": { 00:16:03.801 "read": true, 00:16:03.801 "write": true, 00:16:03.801 "unmap": false, 00:16:03.801 "flush": false, 00:16:03.801 "reset": true, 00:16:03.801 "nvme_admin": false, 00:16:03.801 "nvme_io": false, 00:16:03.801 "nvme_io_md": false, 00:16:03.801 "write_zeroes": true, 00:16:03.801 "zcopy": false, 00:16:03.801 "get_zone_info": false, 00:16:03.801 "zone_management": false, 00:16:03.801 "zone_append": false, 00:16:03.801 "compare": false, 00:16:03.801 "compare_and_write": false, 00:16:03.801 "abort": false, 00:16:03.801 "seek_hole": false, 00:16:03.801 "seek_data": false, 00:16:03.801 "copy": false, 00:16:03.801 "nvme_iov_md": false 00:16:03.801 }, 00:16:03.801 "driver_specific": { 00:16:03.801 "raid": { 00:16:03.801 "uuid": "6f616863-8b16-40d8-8722-3d2561111b1f", 00:16:03.801 "strip_size_kb": 64, 00:16:03.801 "state": "online", 00:16:03.801 
"raid_level": "raid5f", 00:16:03.801 "superblock": true, 00:16:03.801 "num_base_bdevs": 3, 00:16:03.801 "num_base_bdevs_discovered": 3, 00:16:03.801 "num_base_bdevs_operational": 3, 00:16:03.801 "base_bdevs_list": [ 00:16:03.801 { 00:16:03.801 "name": "NewBaseBdev", 00:16:03.801 "uuid": "4a6482dc-db48-4e0b-9ef0-ec486804b58c", 00:16:03.801 "is_configured": true, 00:16:03.801 "data_offset": 2048, 00:16:03.801 "data_size": 63488 00:16:03.801 }, 00:16:03.801 { 00:16:03.801 "name": "BaseBdev2", 00:16:03.801 "uuid": "fa5794bf-0cff-4d58-aa47-8e4da099e39f", 00:16:03.801 "is_configured": true, 00:16:03.801 "data_offset": 2048, 00:16:03.801 "data_size": 63488 00:16:03.801 }, 00:16:03.801 { 00:16:03.801 "name": "BaseBdev3", 00:16:03.801 "uuid": "b574948a-386f-420c-8faf-db300d5b4fb5", 00:16:03.801 "is_configured": true, 00:16:03.801 "data_offset": 2048, 00:16:03.801 "data_size": 63488 00:16:03.801 } 00:16:03.801 ] 00:16:03.801 } 00:16:03.801 } 00:16:03.801 }' 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:03.801 BaseBdev2 00:16:03.801 BaseBdev3' 00:16:03.801 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.063 19:14:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.063 [2024-11-27 19:14:13.621449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.063 [2024-11-27 19:14:13.621477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.063 [2024-11-27 19:14:13.621550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.063 [2024-11-27 19:14:13.621884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.063 [2024-11-27 19:14:13.621904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80587 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80587 ']' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80587 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80587 00:16:04.063 killing process with pid 80587 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80587' 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80587 00:16:04.063 [2024-11-27 19:14:13.670731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:04.063 19:14:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80587 00:16:04.633 [2024-11-27 19:14:13.986724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.574 19:14:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:05.574 00:16:05.574 real 0m10.756s 00:16:05.574 user 0m16.845s 00:16:05.574 sys 0m2.107s 00:16:05.574 19:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.574 19:14:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.574 ************************************ 00:16:05.574 END TEST raid5f_state_function_test_sb 00:16:05.574 ************************************ 00:16:05.835 19:14:15 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:05.835 19:14:15 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:05.835 19:14:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.835 19:14:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:05.835 ************************************ 00:16:05.835 START TEST raid5f_superblock_test 00:16:05.835 ************************************ 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81207 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81207 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81207 ']' 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.835 19:14:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.835 [2024-11-27 19:14:15.348656] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:05.835 [2024-11-27 19:14:15.348789] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81207 ] 00:16:06.095 [2024-11-27 19:14:15.528614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.095 [2024-11-27 19:14:15.662241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.355 [2024-11-27 19:14:15.882119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.355 [2024-11-27 19:14:15.882179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.615 malloc1 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.615 [2024-11-27 19:14:16.208075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:06.615 [2024-11-27 19:14:16.208145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.615 [2024-11-27 19:14:16.208168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:06.615 [2024-11-27 19:14:16.208178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.615 [2024-11-27 19:14:16.210650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.615 [2024-11-27 19:14:16.210685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:06.615 pt1 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.615 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 malloc2 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 [2024-11-27 19:14:16.270894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:06.876 [2024-11-27 19:14:16.270949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.876 [2024-11-27 19:14:16.270976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:06.876 [2024-11-27 19:14:16.270986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.876 [2024-11-27 19:14:16.273392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.876 [2024-11-27 19:14:16.273424] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:06.876 pt2 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 malloc3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 [2024-11-27 19:14:16.344116] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:06.876 [2024-11-27 19:14:16.344170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.876 [2024-11-27 19:14:16.344192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:06.876 [2024-11-27 19:14:16.344201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.876 [2024-11-27 19:14:16.346553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.876 [2024-11-27 19:14:16.346586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:06.876 pt3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 [2024-11-27 19:14:16.356153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:06.876 [2024-11-27 19:14:16.358179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:06.876 [2024-11-27 19:14:16.358246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:06.876 [2024-11-27 19:14:16.358413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:06.876 [2024-11-27 19:14:16.358432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:06.876 [2024-11-27 19:14:16.358701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:06.876 [2024-11-27 19:14:16.364128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:06.876 [2024-11-27 19:14:16.364149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:06.876 [2024-11-27 19:14:16.364338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.876 
19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.876 "name": "raid_bdev1", 00:16:06.876 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:06.876 "strip_size_kb": 64, 00:16:06.876 "state": "online", 00:16:06.876 "raid_level": "raid5f", 00:16:06.876 "superblock": true, 00:16:06.876 "num_base_bdevs": 3, 00:16:06.876 "num_base_bdevs_discovered": 3, 00:16:06.876 "num_base_bdevs_operational": 3, 00:16:06.876 "base_bdevs_list": [ 00:16:06.876 { 00:16:06.876 "name": "pt1", 00:16:06.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:06.876 "is_configured": true, 00:16:06.876 "data_offset": 2048, 00:16:06.876 "data_size": 63488 00:16:06.876 }, 00:16:06.876 { 00:16:06.876 "name": "pt2", 00:16:06.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.876 "is_configured": true, 00:16:06.876 "data_offset": 2048, 00:16:06.876 "data_size": 63488 00:16:06.876 }, 00:16:06.876 { 00:16:06.876 "name": "pt3", 00:16:06.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.876 "is_configured": true, 00:16:06.876 "data_offset": 2048, 00:16:06.876 "data_size": 63488 00:16:06.876 } 00:16:06.876 ] 00:16:06.876 }' 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.876 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:07.447 19:14:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.447 [2024-11-27 19:14:16.814872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.447 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.447 "name": "raid_bdev1", 00:16:07.447 "aliases": [ 00:16:07.447 "5e5408d8-9ba1-4144-8c08-7b0d52293fcf" 00:16:07.447 ], 00:16:07.447 "product_name": "Raid Volume", 00:16:07.447 "block_size": 512, 00:16:07.447 "num_blocks": 126976, 00:16:07.447 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:07.448 "assigned_rate_limits": { 00:16:07.448 "rw_ios_per_sec": 0, 00:16:07.448 "rw_mbytes_per_sec": 0, 00:16:07.448 "r_mbytes_per_sec": 0, 00:16:07.448 "w_mbytes_per_sec": 0 00:16:07.448 }, 00:16:07.448 "claimed": false, 00:16:07.448 "zoned": false, 00:16:07.448 "supported_io_types": { 00:16:07.448 "read": true, 00:16:07.448 "write": true, 00:16:07.448 "unmap": false, 00:16:07.448 "flush": false, 00:16:07.448 "reset": true, 00:16:07.448 "nvme_admin": false, 00:16:07.448 "nvme_io": false, 00:16:07.448 "nvme_io_md": false, 
00:16:07.448 "write_zeroes": true, 00:16:07.448 "zcopy": false, 00:16:07.448 "get_zone_info": false, 00:16:07.448 "zone_management": false, 00:16:07.448 "zone_append": false, 00:16:07.448 "compare": false, 00:16:07.448 "compare_and_write": false, 00:16:07.448 "abort": false, 00:16:07.448 "seek_hole": false, 00:16:07.448 "seek_data": false, 00:16:07.448 "copy": false, 00:16:07.448 "nvme_iov_md": false 00:16:07.448 }, 00:16:07.448 "driver_specific": { 00:16:07.448 "raid": { 00:16:07.448 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:07.448 "strip_size_kb": 64, 00:16:07.448 "state": "online", 00:16:07.448 "raid_level": "raid5f", 00:16:07.448 "superblock": true, 00:16:07.448 "num_base_bdevs": 3, 00:16:07.448 "num_base_bdevs_discovered": 3, 00:16:07.448 "num_base_bdevs_operational": 3, 00:16:07.448 "base_bdevs_list": [ 00:16:07.448 { 00:16:07.448 "name": "pt1", 00:16:07.448 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.448 "is_configured": true, 00:16:07.448 "data_offset": 2048, 00:16:07.448 "data_size": 63488 00:16:07.448 }, 00:16:07.448 { 00:16:07.448 "name": "pt2", 00:16:07.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.448 "is_configured": true, 00:16:07.448 "data_offset": 2048, 00:16:07.448 "data_size": 63488 00:16:07.448 }, 00:16:07.448 { 00:16:07.448 "name": "pt3", 00:16:07.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.448 "is_configured": true, 00:16:07.448 "data_offset": 2048, 00:16:07.448 "data_size": 63488 00:16:07.448 } 00:16:07.448 ] 00:16:07.448 } 00:16:07.448 } 00:16:07.448 }' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:07.448 pt2 00:16:07.448 pt3' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.448 19:14:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.448 
19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:07.448 [2024-11-27 19:14:17.054384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.448 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5e5408d8-9ba1-4144-8c08-7b0d52293fcf 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5e5408d8-9ba1-4144-8c08-7b0d52293fcf ']' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.709 19:14:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 [2024-11-27 19:14:17.102130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.709 [2024-11-27 19:14:17.102158] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.709 [2024-11-27 19:14:17.102234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.709 [2024-11-27 19:14:17.102309] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.709 [2024-11-27 19:14:17.102319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 [2024-11-27 19:14:17.249922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:07.709 [2024-11-27 19:14:17.252124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:07.709 [2024-11-27 19:14:17.252180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:07.709 [2024-11-27 19:14:17.252233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:07.709 [2024-11-27 19:14:17.252280] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:07.709 [2024-11-27 19:14:17.252299] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:07.709 [2024-11-27 19:14:17.252317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.709 [2024-11-27 19:14:17.252326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:07.709 request: 00:16:07.709 { 00:16:07.709 "name": "raid_bdev1", 00:16:07.709 "raid_level": "raid5f", 00:16:07.709 "base_bdevs": [ 00:16:07.709 "malloc1", 00:16:07.709 "malloc2", 00:16:07.709 "malloc3" 00:16:07.709 ], 00:16:07.709 "strip_size_kb": 64, 00:16:07.709 "superblock": false, 00:16:07.709 "method": "bdev_raid_create", 00:16:07.709 "req_id": 1 00:16:07.709 } 00:16:07.709 Got JSON-RPC error response 00:16:07.709 response: 00:16:07.709 { 00:16:07.709 "code": -17, 00:16:07.709 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:07.709 } 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 [2024-11-27 19:14:17.317825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.709 [2024-11-27 19:14:17.317866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.709 [2024-11-27 19:14:17.317884] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:07.709 [2024-11-27 19:14:17.317893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.709 [2024-11-27 19:14:17.320361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.709 [2024-11-27 19:14:17.320396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.709 [2024-11-27 19:14:17.320465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.709 [2024-11-27 19:14:17.320510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.709 pt1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.709 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.710 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.710 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.710 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.969 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.969 "name": "raid_bdev1", 00:16:07.969 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:07.969 "strip_size_kb": 64, 00:16:07.969 "state": "configuring", 00:16:07.969 "raid_level": "raid5f", 00:16:07.969 "superblock": true, 00:16:07.969 "num_base_bdevs": 3, 00:16:07.969 "num_base_bdevs_discovered": 1, 00:16:07.969 
"num_base_bdevs_operational": 3, 00:16:07.969 "base_bdevs_list": [ 00:16:07.969 { 00:16:07.969 "name": "pt1", 00:16:07.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:07.969 "is_configured": true, 00:16:07.969 "data_offset": 2048, 00:16:07.969 "data_size": 63488 00:16:07.969 }, 00:16:07.969 { 00:16:07.969 "name": null, 00:16:07.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:07.969 "is_configured": false, 00:16:07.969 "data_offset": 2048, 00:16:07.969 "data_size": 63488 00:16:07.969 }, 00:16:07.969 { 00:16:07.969 "name": null, 00:16:07.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:07.969 "is_configured": false, 00:16:07.969 "data_offset": 2048, 00:16:07.969 "data_size": 63488 00:16:07.969 } 00:16:07.969 ] 00:16:07.969 }' 00:16:07.969 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.969 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.228 [2024-11-27 19:14:17.796991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.228 [2024-11-27 19:14:17.797054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.228 [2024-11-27 19:14:17.797077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:08.228 [2024-11-27 19:14:17.797087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.228 [2024-11-27 19:14:17.797568] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.228 [2024-11-27 19:14:17.797603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.228 [2024-11-27 19:14:17.797705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.228 [2024-11-27 19:14:17.797736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.228 pt2 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.228 [2024-11-27 19:14:17.804980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.228 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.229 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.229 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.229 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.229 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.488 19:14:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.488 "name": "raid_bdev1", 00:16:08.488 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:08.488 "strip_size_kb": 64, 00:16:08.488 "state": "configuring", 00:16:08.488 "raid_level": "raid5f", 00:16:08.488 "superblock": true, 00:16:08.488 "num_base_bdevs": 3, 00:16:08.488 "num_base_bdevs_discovered": 1, 00:16:08.488 "num_base_bdevs_operational": 3, 00:16:08.488 "base_bdevs_list": [ 00:16:08.488 { 00:16:08.488 "name": "pt1", 00:16:08.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.488 "is_configured": true, 00:16:08.488 "data_offset": 2048, 00:16:08.488 "data_size": 63488 00:16:08.488 }, 00:16:08.488 { 00:16:08.488 "name": null, 00:16:08.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.488 "is_configured": false, 00:16:08.488 "data_offset": 0, 00:16:08.488 "data_size": 63488 00:16:08.488 }, 00:16:08.488 { 00:16:08.488 "name": null, 00:16:08.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.488 "is_configured": false, 00:16:08.488 "data_offset": 2048, 00:16:08.488 "data_size": 63488 00:16:08.488 } 00:16:08.488 ] 00:16:08.488 }' 00:16:08.488 19:14:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.488 19:14:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.748 [2024-11-27 19:14:18.192284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.748 [2024-11-27 19:14:18.192344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.748 [2024-11-27 19:14:18.192360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:08.748 [2024-11-27 19:14:18.192371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.748 [2024-11-27 19:14:18.192834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.748 [2024-11-27 19:14:18.192863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.748 [2024-11-27 19:14:18.192934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.748 [2024-11-27 19:14:18.192957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.748 pt2 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.748 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:08.749 19:14:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.749 [2024-11-27 19:14:18.204265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:08.749 [2024-11-27 19:14:18.204311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.749 [2024-11-27 19:14:18.204323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:08.749 [2024-11-27 19:14:18.204334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.749 [2024-11-27 19:14:18.204729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.749 [2024-11-27 19:14:18.204760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:08.749 [2024-11-27 19:14:18.204836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:08.749 [2024-11-27 19:14:18.204856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:08.749 [2024-11-27 19:14:18.204995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:08.749 [2024-11-27 19:14:18.205017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:08.749 [2024-11-27 19:14:18.205272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:08.749 [2024-11-27 19:14:18.210429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:08.749 [2024-11-27 19:14:18.210451] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:08.749 [2024-11-27 19:14:18.210615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.749 pt3 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.749 "name": "raid_bdev1", 00:16:08.749 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:08.749 "strip_size_kb": 64, 00:16:08.749 "state": "online", 00:16:08.749 "raid_level": "raid5f", 00:16:08.749 "superblock": true, 00:16:08.749 "num_base_bdevs": 3, 00:16:08.749 "num_base_bdevs_discovered": 3, 00:16:08.749 "num_base_bdevs_operational": 3, 00:16:08.749 "base_bdevs_list": [ 00:16:08.749 { 00:16:08.749 "name": "pt1", 00:16:08.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.749 "is_configured": true, 00:16:08.749 "data_offset": 2048, 00:16:08.749 "data_size": 63488 00:16:08.749 }, 00:16:08.749 { 00:16:08.749 "name": "pt2", 00:16:08.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.749 "is_configured": true, 00:16:08.749 "data_offset": 2048, 00:16:08.749 "data_size": 63488 00:16:08.749 }, 00:16:08.749 { 00:16:08.749 "name": "pt3", 00:16:08.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.749 "is_configured": true, 00:16:08.749 "data_offset": 2048, 00:16:08.749 "data_size": 63488 00:16:08.749 } 00:16:08.749 ] 00:16:08.749 }' 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.749 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.009 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:09.009 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:09.009 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:09.009 
19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.270 [2024-11-27 19:14:18.657503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.270 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:09.270 "name": "raid_bdev1", 00:16:09.270 "aliases": [ 00:16:09.270 "5e5408d8-9ba1-4144-8c08-7b0d52293fcf" 00:16:09.270 ], 00:16:09.270 "product_name": "Raid Volume", 00:16:09.270 "block_size": 512, 00:16:09.270 "num_blocks": 126976, 00:16:09.270 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:09.271 "assigned_rate_limits": { 00:16:09.271 "rw_ios_per_sec": 0, 00:16:09.271 "rw_mbytes_per_sec": 0, 00:16:09.271 "r_mbytes_per_sec": 0, 00:16:09.271 "w_mbytes_per_sec": 0 00:16:09.271 }, 00:16:09.271 "claimed": false, 00:16:09.271 "zoned": false, 00:16:09.271 "supported_io_types": { 00:16:09.271 "read": true, 00:16:09.271 "write": true, 00:16:09.271 "unmap": false, 00:16:09.271 "flush": false, 00:16:09.271 "reset": true, 00:16:09.271 "nvme_admin": false, 00:16:09.271 "nvme_io": false, 00:16:09.271 "nvme_io_md": false, 00:16:09.271 "write_zeroes": true, 00:16:09.271 "zcopy": false, 00:16:09.271 "get_zone_info": false, 
00:16:09.271 "zone_management": false, 00:16:09.271 "zone_append": false, 00:16:09.271 "compare": false, 00:16:09.271 "compare_and_write": false, 00:16:09.271 "abort": false, 00:16:09.271 "seek_hole": false, 00:16:09.271 "seek_data": false, 00:16:09.271 "copy": false, 00:16:09.271 "nvme_iov_md": false 00:16:09.271 }, 00:16:09.271 "driver_specific": { 00:16:09.271 "raid": { 00:16:09.271 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:09.271 "strip_size_kb": 64, 00:16:09.271 "state": "online", 00:16:09.271 "raid_level": "raid5f", 00:16:09.271 "superblock": true, 00:16:09.271 "num_base_bdevs": 3, 00:16:09.271 "num_base_bdevs_discovered": 3, 00:16:09.271 "num_base_bdevs_operational": 3, 00:16:09.271 "base_bdevs_list": [ 00:16:09.271 { 00:16:09.271 "name": "pt1", 00:16:09.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.271 "is_configured": true, 00:16:09.271 "data_offset": 2048, 00:16:09.271 "data_size": 63488 00:16:09.271 }, 00:16:09.271 { 00:16:09.271 "name": "pt2", 00:16:09.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.271 "is_configured": true, 00:16:09.271 "data_offset": 2048, 00:16:09.271 "data_size": 63488 00:16:09.271 }, 00:16:09.271 { 00:16:09.271 "name": "pt3", 00:16:09.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.271 "is_configured": true, 00:16:09.271 "data_offset": 2048, 00:16:09.271 "data_size": 63488 00:16:09.271 } 00:16:09.271 ] 00:16:09.271 } 00:16:09.271 } 00:16:09.271 }' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:09.271 pt2 00:16:09.271 pt3' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.271 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.271 [2024-11-27 19:14:18.889017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5e5408d8-9ba1-4144-8c08-7b0d52293fcf '!=' 5e5408d8-9ba1-4144-8c08-7b0d52293fcf ']' 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:09.531 19:14:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.531 [2024-11-27 19:14:18.916866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:09.531 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.532 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.532 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.532 "name": "raid_bdev1", 00:16:09.532 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:09.532 "strip_size_kb": 64, 00:16:09.532 "state": "online", 00:16:09.532 "raid_level": "raid5f", 00:16:09.532 "superblock": true, 00:16:09.532 "num_base_bdevs": 3, 00:16:09.532 "num_base_bdevs_discovered": 2, 00:16:09.532 "num_base_bdevs_operational": 2, 00:16:09.532 "base_bdevs_list": [ 00:16:09.532 { 00:16:09.532 "name": null, 00:16:09.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.532 "is_configured": false, 00:16:09.532 "data_offset": 0, 00:16:09.532 "data_size": 63488 00:16:09.532 }, 00:16:09.532 { 00:16:09.532 "name": "pt2", 00:16:09.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.532 "is_configured": true, 00:16:09.532 "data_offset": 2048, 00:16:09.532 "data_size": 63488 00:16:09.532 }, 00:16:09.532 { 00:16:09.532 "name": "pt3", 00:16:09.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.532 "is_configured": true, 00:16:09.532 "data_offset": 2048, 00:16:09.532 "data_size": 63488 00:16:09.532 } 00:16:09.532 ] 00:16:09.532 }' 00:16:09.532 19:14:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.532 19:14:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.793 [2024-11-27 19:14:19.372017] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:16:09.793 [2024-11-27 19:14:19.372051] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.793 [2024-11-27 19:14:19.372135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.793 [2024-11-27 19:14:19.372199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.793 [2024-11-27 19:14:19.372216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:09.793 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.054 19:14:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:10.054 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.055 [2024-11-27 19:14:19.459825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:10.055 [2024-11-27 19:14:19.459883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.055 [2024-11-27 19:14:19.459901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:10.055 [2024-11-27 19:14:19.459913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:10.055 [2024-11-27 19:14:19.462385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.055 [2024-11-27 19:14:19.462422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:10.055 [2024-11-27 19:14:19.462501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:10.055 [2024-11-27 19:14:19.462556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.055 pt2 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.055 "name": "raid_bdev1", 00:16:10.055 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:10.055 "strip_size_kb": 64, 00:16:10.055 "state": "configuring", 00:16:10.055 "raid_level": "raid5f", 00:16:10.055 "superblock": true, 00:16:10.055 "num_base_bdevs": 3, 00:16:10.055 "num_base_bdevs_discovered": 1, 00:16:10.055 "num_base_bdevs_operational": 2, 00:16:10.055 "base_bdevs_list": [ 00:16:10.055 { 00:16:10.055 "name": null, 00:16:10.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.055 "is_configured": false, 00:16:10.055 "data_offset": 2048, 00:16:10.055 "data_size": 63488 00:16:10.055 }, 00:16:10.055 { 00:16:10.055 "name": "pt2", 00:16:10.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.055 "is_configured": true, 00:16:10.055 "data_offset": 2048, 00:16:10.055 "data_size": 63488 00:16:10.055 }, 00:16:10.055 { 00:16:10.055 "name": null, 00:16:10.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.055 "is_configured": false, 00:16:10.055 "data_offset": 2048, 00:16:10.055 "data_size": 63488 00:16:10.055 } 00:16:10.055 ] 00:16:10.055 }' 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.055 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 [2024-11-27 19:14:19.911165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:10.315 [2024-11-27 19:14:19.911252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.315 [2024-11-27 19:14:19.911276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:10.315 [2024-11-27 19:14:19.911288] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.315 [2024-11-27 19:14:19.911896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.315 [2024-11-27 19:14:19.911927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:10.315 [2024-11-27 19:14:19.912025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:10.315 [2024-11-27 19:14:19.912057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:10.315 [2024-11-27 19:14:19.912183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:10.315 [2024-11-27 19:14:19.912202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:10.315 [2024-11-27 19:14:19.912487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:10.315 [2024-11-27 19:14:19.917674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:10.315 [2024-11-27 19:14:19.917714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:16:10.315 [2024-11-27 19:14:19.918063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.315 pt3 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.315 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.575 19:14:19 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.575 "name": "raid_bdev1", 00:16:10.575 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:10.575 "strip_size_kb": 64, 00:16:10.575 "state": "online", 00:16:10.575 "raid_level": "raid5f", 00:16:10.575 "superblock": true, 00:16:10.575 "num_base_bdevs": 3, 00:16:10.575 "num_base_bdevs_discovered": 2, 00:16:10.575 "num_base_bdevs_operational": 2, 00:16:10.575 "base_bdevs_list": [ 00:16:10.575 { 00:16:10.575 "name": null, 00:16:10.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.575 "is_configured": false, 00:16:10.575 "data_offset": 2048, 00:16:10.575 "data_size": 63488 00:16:10.575 }, 00:16:10.575 { 00:16:10.575 "name": "pt2", 00:16:10.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.575 "is_configured": true, 00:16:10.575 "data_offset": 2048, 00:16:10.575 "data_size": 63488 00:16:10.575 }, 00:16:10.575 { 00:16:10.575 "name": "pt3", 00:16:10.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.575 "is_configured": true, 00:16:10.575 "data_offset": 2048, 00:16:10.575 "data_size": 63488 00:16:10.575 } 00:16:10.575 ] 00:16:10.575 }' 00:16:10.575 19:14:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.575 19:14:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 [2024-11-27 19:14:20.369069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.835 [2024-11-27 19:14:20.369103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.835 [2024-11-27 19:14:20.369184] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:10.835 [2024-11-27 19:14:20.369250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.835 [2024-11-27 19:14:20.369260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:10.835 19:14:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.835 [2024-11-27 19:14:20.440943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:10.835 [2024-11-27 19:14:20.441012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.835 [2024-11-27 19:14:20.441031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:10.835 [2024-11-27 19:14:20.441041] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.835 [2024-11-27 19:14:20.443660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.835 [2024-11-27 19:14:20.443711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:10.835 [2024-11-27 19:14:20.443797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:10.835 [2024-11-27 19:14:20.443846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:10.835 [2024-11-27 19:14:20.444024] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:10.835 [2024-11-27 19:14:20.444045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.835 [2024-11-27 19:14:20.444063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:10.835 [2024-11-27 19:14:20.444115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.835 pt1 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:10.835 19:14:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.835 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.095 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.095 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.095 "name": "raid_bdev1", 00:16:11.095 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:11.095 "strip_size_kb": 64, 00:16:11.095 "state": "configuring", 00:16:11.095 "raid_level": "raid5f", 00:16:11.095 
"superblock": true, 00:16:11.095 "num_base_bdevs": 3, 00:16:11.095 "num_base_bdevs_discovered": 1, 00:16:11.095 "num_base_bdevs_operational": 2, 00:16:11.095 "base_bdevs_list": [ 00:16:11.095 { 00:16:11.095 "name": null, 00:16:11.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.095 "is_configured": false, 00:16:11.095 "data_offset": 2048, 00:16:11.095 "data_size": 63488 00:16:11.095 }, 00:16:11.095 { 00:16:11.095 "name": "pt2", 00:16:11.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.095 "is_configured": true, 00:16:11.095 "data_offset": 2048, 00:16:11.095 "data_size": 63488 00:16:11.095 }, 00:16:11.095 { 00:16:11.095 "name": null, 00:16:11.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.095 "is_configured": false, 00:16:11.095 "data_offset": 2048, 00:16:11.095 "data_size": 63488 00:16:11.095 } 00:16:11.095 ] 00:16:11.095 }' 00:16:11.095 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.095 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.356 [2024-11-27 19:14:20.944110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:11.356 [2024-11-27 19:14:20.944186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.356 [2024-11-27 19:14:20.944210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:11.356 [2024-11-27 19:14:20.944220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.356 [2024-11-27 19:14:20.944843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.356 [2024-11-27 19:14:20.944872] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:11.356 [2024-11-27 19:14:20.944979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:11.356 [2024-11-27 19:14:20.945019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:11.356 [2024-11-27 19:14:20.945193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:11.356 [2024-11-27 19:14:20.945210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:11.356 [2024-11-27 19:14:20.945518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:11.356 [2024-11-27 19:14:20.951053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:11.356 [2024-11-27 19:14:20.951082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:11.356 [2024-11-27 19:14:20.951358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.356 pt3 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.356 19:14:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.617 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.617 "name": "raid_bdev1", 00:16:11.617 "uuid": "5e5408d8-9ba1-4144-8c08-7b0d52293fcf", 00:16:11.617 "strip_size_kb": 64, 00:16:11.617 "state": "online", 00:16:11.617 "raid_level": 
"raid5f", 00:16:11.617 "superblock": true, 00:16:11.617 "num_base_bdevs": 3, 00:16:11.617 "num_base_bdevs_discovered": 2, 00:16:11.617 "num_base_bdevs_operational": 2, 00:16:11.617 "base_bdevs_list": [ 00:16:11.617 { 00:16:11.617 "name": null, 00:16:11.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.617 "is_configured": false, 00:16:11.617 "data_offset": 2048, 00:16:11.617 "data_size": 63488 00:16:11.617 }, 00:16:11.617 { 00:16:11.617 "name": "pt2", 00:16:11.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.617 "is_configured": true, 00:16:11.617 "data_offset": 2048, 00:16:11.617 "data_size": 63488 00:16:11.617 }, 00:16:11.617 { 00:16:11.617 "name": "pt3", 00:16:11.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:11.617 "is_configured": true, 00:16:11.617 "data_offset": 2048, 00:16:11.617 "data_size": 63488 00:16:11.617 } 00:16:11.617 ] 00:16:11.617 }' 00:16:11.617 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.617 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.877 [2024-11-27 19:14:21.466257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5e5408d8-9ba1-4144-8c08-7b0d52293fcf '!=' 5e5408d8-9ba1-4144-8c08-7b0d52293fcf ']' 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81207 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81207 ']' 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81207 00:16:11.877 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81207 00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.137 killing process with pid 81207 00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81207' 00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81207 00:16:12.137 [2024-11-27 19:14:21.549597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.137 [2024-11-27 19:14:21.549739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:16:12.137 19:14:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81207 00:16:12.137 [2024-11-27 19:14:21.549828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.137 [2024-11-27 19:14:21.549844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:12.402 [2024-11-27 19:14:21.880292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.831 19:14:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:13.831 00:16:13.831 real 0m7.823s 00:16:13.831 user 0m12.001s 00:16:13.831 sys 0m1.551s 00:16:13.831 19:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.831 19:14:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.831 ************************************ 00:16:13.831 END TEST raid5f_superblock_test 00:16:13.831 ************************************ 00:16:13.831 19:14:23 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:13.831 19:14:23 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:13.831 19:14:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:13.831 19:14:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.831 19:14:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.831 ************************************ 00:16:13.831 START TEST raid5f_rebuild_test 00:16:13.831 ************************************ 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:13.831 19:14:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81651 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81651 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81651 ']' 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.831 19:14:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.831 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:13.831 Zero copy mechanism will not be used. 00:16:13.831 [2024-11-27 19:14:23.254532] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:13.831 [2024-11-27 19:14:23.254648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81651 ] 00:16:13.831 [2024-11-27 19:14:23.430746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.091 [2024-11-27 19:14:23.559722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.349 [2024-11-27 19:14:23.798638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.349 [2024-11-27 19:14:23.798677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.609 BaseBdev1_malloc 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.609 19:14:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.609 [2024-11-27 19:14:24.119425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:14.609 [2024-11-27 19:14:24.119497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.609 [2024-11-27 19:14:24.119521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:14.609 [2024-11-27 19:14:24.119534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.609 [2024-11-27 19:14:24.121950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.609 [2024-11-27 19:14:24.121993] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.609 BaseBdev1 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.609 BaseBdev2_malloc 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.609 [2024-11-27 19:14:24.181106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:16:14.609 [2024-11-27 19:14:24.181171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.609 [2024-11-27 19:14:24.181195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:14.609 [2024-11-27 19:14:24.181209] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.609 [2024-11-27 19:14:24.183707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.609 [2024-11-27 19:14:24.183755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:14.609 BaseBdev2 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.609 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.870 BaseBdev3_malloc 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.870 [2024-11-27 19:14:24.256255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:14.870 [2024-11-27 19:14:24.256319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.870 [2024-11-27 19:14:24.256346] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:16:14.870 [2024-11-27 19:14:24.256361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.870 [2024-11-27 19:14:24.258821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.870 [2024-11-27 19:14:24.258859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:14.870 BaseBdev3 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.870 spare_malloc 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:14.870 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.871 spare_delay 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.871 [2024-11-27 19:14:24.329944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.871 [2024-11-27 19:14:24.330013] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.871 [2024-11-27 19:14:24.330029] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:14.871 [2024-11-27 19:14:24.330040] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.871 [2024-11-27 19:14:24.332444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.871 [2024-11-27 19:14:24.332485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.871 spare 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.871 [2024-11-27 19:14:24.341992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.871 [2024-11-27 19:14:24.344093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.871 [2024-11-27 19:14:24.344162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.871 [2024-11-27 19:14:24.344244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:14.871 [2024-11-27 19:14:24.344256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:14.871 [2024-11-27 19:14:24.344543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:14.871 [2024-11-27 19:14:24.349824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:14.871 [2024-11-27 19:14:24.349855] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:14.871 [2024-11-27 19:14:24.350028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.871 19:14:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.871 "name": "raid_bdev1", 00:16:14.871 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:14.871 "strip_size_kb": 64, 00:16:14.871 "state": "online", 00:16:14.871 "raid_level": "raid5f", 00:16:14.871 "superblock": false, 00:16:14.871 "num_base_bdevs": 3, 00:16:14.871 "num_base_bdevs_discovered": 3, 00:16:14.871 "num_base_bdevs_operational": 3, 00:16:14.871 "base_bdevs_list": [ 00:16:14.871 { 00:16:14.871 "name": "BaseBdev1", 00:16:14.871 "uuid": "173f77c6-8e85-5dd3-badb-10d3daf50107", 00:16:14.871 "is_configured": true, 00:16:14.871 "data_offset": 0, 00:16:14.871 "data_size": 65536 00:16:14.871 }, 00:16:14.871 { 00:16:14.871 "name": "BaseBdev2", 00:16:14.871 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:14.871 "is_configured": true, 00:16:14.871 "data_offset": 0, 00:16:14.871 "data_size": 65536 00:16:14.871 }, 00:16:14.871 { 00:16:14.871 "name": "BaseBdev3", 00:16:14.871 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:14.871 "is_configured": true, 00:16:14.871 "data_offset": 0, 00:16:14.871 "data_size": 65536 00:16:14.871 } 00:16:14.871 ] 00:16:14.871 }' 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.871 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:15.441 [2024-11-27 19:14:24.808952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:16:15.441 19:14:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:15.441 [2024-11-27 19:14:25.064357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:15.701 /dev/nbd0 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.701 1+0 records in 00:16:15.701 1+0 records out 00:16:15.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003336 s, 12.3 MB/s 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:15.701 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:15.961 512+0 records in 00:16:15.961 512+0 records out 00:16:15.961 67108864 bytes (67 MB, 64 MiB) copied, 0.39425 s, 170 MB/s 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.961 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.220 [2024-11-27 19:14:25.722483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.220 [2024-11-27 19:14:25.758715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.220 "name": "raid_bdev1", 00:16:16.220 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:16.220 "strip_size_kb": 64, 00:16:16.220 "state": "online", 00:16:16.220 "raid_level": "raid5f", 00:16:16.220 "superblock": false, 00:16:16.220 "num_base_bdevs": 3, 00:16:16.220 "num_base_bdevs_discovered": 2, 00:16:16.220 "num_base_bdevs_operational": 2, 00:16:16.220 "base_bdevs_list": [ 00:16:16.220 { 00:16:16.220 "name": null, 00:16:16.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.220 "is_configured": false, 00:16:16.220 "data_offset": 0, 00:16:16.220 "data_size": 65536 00:16:16.220 }, 00:16:16.220 { 00:16:16.220 "name": "BaseBdev2", 00:16:16.220 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:16.220 "is_configured": true, 00:16:16.220 "data_offset": 0, 00:16:16.220 "data_size": 65536 00:16:16.220 }, 00:16:16.220 { 00:16:16.220 "name": "BaseBdev3", 00:16:16.220 "uuid": 
"e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:16.220 "is_configured": true, 00:16:16.220 "data_offset": 0, 00:16:16.220 "data_size": 65536 00:16:16.220 } 00:16:16.220 ] 00:16:16.220 }' 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.220 19:14:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.790 19:14:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:16.790 19:14:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.790 19:14:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.790 [2024-11-27 19:14:26.193939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.790 [2024-11-27 19:14:26.211293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:16.790 19:14:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.790 19:14:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:16.790 [2024-11-27 19:14:26.219227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.729 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.729 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.729 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.729 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.729 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.729 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.729 19:14:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.730 "name": "raid_bdev1", 00:16:17.730 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:17.730 "strip_size_kb": 64, 00:16:17.730 "state": "online", 00:16:17.730 "raid_level": "raid5f", 00:16:17.730 "superblock": false, 00:16:17.730 "num_base_bdevs": 3, 00:16:17.730 "num_base_bdevs_discovered": 3, 00:16:17.730 "num_base_bdevs_operational": 3, 00:16:17.730 "process": { 00:16:17.730 "type": "rebuild", 00:16:17.730 "target": "spare", 00:16:17.730 "progress": { 00:16:17.730 "blocks": 20480, 00:16:17.730 "percent": 15 00:16:17.730 } 00:16:17.730 }, 00:16:17.730 "base_bdevs_list": [ 00:16:17.730 { 00:16:17.730 "name": "spare", 00:16:17.730 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:17.730 "is_configured": true, 00:16:17.730 "data_offset": 0, 00:16:17.730 "data_size": 65536 00:16:17.730 }, 00:16:17.730 { 00:16:17.730 "name": "BaseBdev2", 00:16:17.730 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:17.730 "is_configured": true, 00:16:17.730 "data_offset": 0, 00:16:17.730 "data_size": 65536 00:16:17.730 }, 00:16:17.730 { 00:16:17.730 "name": "BaseBdev3", 00:16:17.730 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:17.730 "is_configured": true, 00:16:17.730 "data_offset": 0, 00:16:17.730 "data_size": 65536 00:16:17.730 } 00:16:17.730 ] 00:16:17.730 }' 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.730 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.990 [2024-11-27 19:14:27.381910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.990 [2024-11-27 19:14:27.429785] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:17.990 [2024-11-27 19:14:27.429895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.990 [2024-11-27 19:14:27.429937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.990 [2024-11-27 19:14:27.429960] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.990 "name": "raid_bdev1", 00:16:17.990 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:17.990 "strip_size_kb": 64, 00:16:17.990 "state": "online", 00:16:17.990 "raid_level": "raid5f", 00:16:17.990 "superblock": false, 00:16:17.990 "num_base_bdevs": 3, 00:16:17.990 "num_base_bdevs_discovered": 2, 00:16:17.990 "num_base_bdevs_operational": 2, 00:16:17.990 "base_bdevs_list": [ 00:16:17.990 { 00:16:17.990 "name": null, 00:16:17.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.990 "is_configured": false, 00:16:17.990 "data_offset": 0, 00:16:17.990 "data_size": 65536 00:16:17.990 }, 00:16:17.990 { 00:16:17.990 "name": "BaseBdev2", 00:16:17.990 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:17.990 "is_configured": true, 00:16:17.990 "data_offset": 0, 00:16:17.990 "data_size": 65536 00:16:17.990 }, 00:16:17.990 { 00:16:17.990 "name": "BaseBdev3", 00:16:17.990 "uuid": 
"e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:17.990 "is_configured": true, 00:16:17.990 "data_offset": 0, 00:16:17.990 "data_size": 65536 00:16:17.990 } 00:16:17.990 ] 00:16:17.990 }' 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.990 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.560 "name": "raid_bdev1", 00:16:18.560 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:18.560 "strip_size_kb": 64, 00:16:18.560 "state": "online", 00:16:18.560 "raid_level": "raid5f", 00:16:18.560 "superblock": false, 00:16:18.560 "num_base_bdevs": 3, 00:16:18.560 "num_base_bdevs_discovered": 2, 00:16:18.560 "num_base_bdevs_operational": 2, 00:16:18.560 "base_bdevs_list": [ 00:16:18.560 { 00:16:18.560 
"name": null, 00:16:18.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.560 "is_configured": false, 00:16:18.560 "data_offset": 0, 00:16:18.560 "data_size": 65536 00:16:18.560 }, 00:16:18.560 { 00:16:18.560 "name": "BaseBdev2", 00:16:18.560 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:18.560 "is_configured": true, 00:16:18.560 "data_offset": 0, 00:16:18.560 "data_size": 65536 00:16:18.560 }, 00:16:18.560 { 00:16:18.560 "name": "BaseBdev3", 00:16:18.560 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:18.560 "is_configured": true, 00:16:18.560 "data_offset": 0, 00:16:18.560 "data_size": 65536 00:16:18.560 } 00:16:18.560 ] 00:16:18.560 }' 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.560 19:14:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.560 19:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.560 19:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.560 19:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.560 19:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.560 [2024-11-27 19:14:28.035208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.560 [2024-11-27 19:14:28.051279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:18.560 19:14:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.560 19:14:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:18.560 [2024-11-27 19:14:28.058815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.499 "name": "raid_bdev1", 00:16:19.499 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:19.499 "strip_size_kb": 64, 00:16:19.499 "state": "online", 00:16:19.499 "raid_level": "raid5f", 00:16:19.499 "superblock": false, 00:16:19.499 "num_base_bdevs": 3, 00:16:19.499 "num_base_bdevs_discovered": 3, 00:16:19.499 "num_base_bdevs_operational": 3, 00:16:19.499 "process": { 00:16:19.499 "type": "rebuild", 00:16:19.499 "target": "spare", 00:16:19.499 "progress": { 00:16:19.499 "blocks": 20480, 00:16:19.499 "percent": 15 00:16:19.499 } 00:16:19.499 }, 00:16:19.499 "base_bdevs_list": [ 00:16:19.499 { 00:16:19.499 "name": "spare", 00:16:19.499 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:19.499 "is_configured": true, 00:16:19.499 "data_offset": 0, 
00:16:19.499 "data_size": 65536 00:16:19.499 }, 00:16:19.499 { 00:16:19.499 "name": "BaseBdev2", 00:16:19.499 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:19.499 "is_configured": true, 00:16:19.499 "data_offset": 0, 00:16:19.499 "data_size": 65536 00:16:19.499 }, 00:16:19.499 { 00:16:19.499 "name": "BaseBdev3", 00:16:19.499 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:19.499 "is_configured": true, 00:16:19.499 "data_offset": 0, 00:16:19.499 "data_size": 65536 00:16:19.499 } 00:16:19.499 ] 00:16:19.499 }' 00:16:19.499 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=551 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.764 19:14:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.764 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.765 "name": "raid_bdev1", 00:16:19.765 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:19.765 "strip_size_kb": 64, 00:16:19.765 "state": "online", 00:16:19.765 "raid_level": "raid5f", 00:16:19.765 "superblock": false, 00:16:19.765 "num_base_bdevs": 3, 00:16:19.765 "num_base_bdevs_discovered": 3, 00:16:19.765 "num_base_bdevs_operational": 3, 00:16:19.765 "process": { 00:16:19.765 "type": "rebuild", 00:16:19.765 "target": "spare", 00:16:19.765 "progress": { 00:16:19.765 "blocks": 22528, 00:16:19.765 "percent": 17 00:16:19.765 } 00:16:19.765 }, 00:16:19.765 "base_bdevs_list": [ 00:16:19.765 { 00:16:19.765 "name": "spare", 00:16:19.765 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:19.765 "is_configured": true, 00:16:19.765 "data_offset": 0, 00:16:19.765 "data_size": 65536 00:16:19.765 }, 00:16:19.765 { 00:16:19.765 "name": "BaseBdev2", 00:16:19.765 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:19.765 "is_configured": true, 00:16:19.765 "data_offset": 0, 00:16:19.765 "data_size": 65536 00:16:19.765 }, 00:16:19.765 { 00:16:19.765 "name": "BaseBdev3", 00:16:19.765 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:19.765 "is_configured": true, 00:16:19.765 "data_offset": 0, 00:16:19.765 "data_size": 65536 00:16:19.765 } 
00:16:19.765 ] 00:16:19.765 }' 00:16:19.765 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.765 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.765 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.765 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.765 19:14:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.147 "name": "raid_bdev1", 00:16:21.147 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:21.147 
"strip_size_kb": 64, 00:16:21.147 "state": "online", 00:16:21.147 "raid_level": "raid5f", 00:16:21.147 "superblock": false, 00:16:21.147 "num_base_bdevs": 3, 00:16:21.147 "num_base_bdevs_discovered": 3, 00:16:21.147 "num_base_bdevs_operational": 3, 00:16:21.147 "process": { 00:16:21.147 "type": "rebuild", 00:16:21.147 "target": "spare", 00:16:21.147 "progress": { 00:16:21.147 "blocks": 45056, 00:16:21.147 "percent": 34 00:16:21.147 } 00:16:21.147 }, 00:16:21.147 "base_bdevs_list": [ 00:16:21.147 { 00:16:21.147 "name": "spare", 00:16:21.147 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:21.147 "is_configured": true, 00:16:21.147 "data_offset": 0, 00:16:21.147 "data_size": 65536 00:16:21.147 }, 00:16:21.147 { 00:16:21.147 "name": "BaseBdev2", 00:16:21.147 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:21.147 "is_configured": true, 00:16:21.147 "data_offset": 0, 00:16:21.147 "data_size": 65536 00:16:21.147 }, 00:16:21.147 { 00:16:21.147 "name": "BaseBdev3", 00:16:21.147 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:21.147 "is_configured": true, 00:16:21.147 "data_offset": 0, 00:16:21.147 "data_size": 65536 00:16:21.147 } 00:16:21.147 ] 00:16:21.147 }' 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.147 19:14:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.087 19:14:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.087 "name": "raid_bdev1", 00:16:22.087 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:22.087 "strip_size_kb": 64, 00:16:22.087 "state": "online", 00:16:22.087 "raid_level": "raid5f", 00:16:22.087 "superblock": false, 00:16:22.087 "num_base_bdevs": 3, 00:16:22.087 "num_base_bdevs_discovered": 3, 00:16:22.087 "num_base_bdevs_operational": 3, 00:16:22.087 "process": { 00:16:22.087 "type": "rebuild", 00:16:22.087 "target": "spare", 00:16:22.087 "progress": { 00:16:22.087 "blocks": 69632, 00:16:22.087 "percent": 53 00:16:22.087 } 00:16:22.087 }, 00:16:22.087 "base_bdevs_list": [ 00:16:22.087 { 00:16:22.087 "name": "spare", 00:16:22.087 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:22.087 "is_configured": true, 00:16:22.087 "data_offset": 0, 00:16:22.087 "data_size": 65536 00:16:22.087 }, 00:16:22.087 { 00:16:22.087 "name": "BaseBdev2", 00:16:22.087 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:22.087 
"is_configured": true, 00:16:22.087 "data_offset": 0, 00:16:22.087 "data_size": 65536 00:16:22.087 }, 00:16:22.087 { 00:16:22.087 "name": "BaseBdev3", 00:16:22.087 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:22.087 "is_configured": true, 00:16:22.087 "data_offset": 0, 00:16:22.087 "data_size": 65536 00:16:22.087 } 00:16:22.087 ] 00:16:22.087 }' 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.087 19:14:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.027 19:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:23.287 19:14:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.287 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.287 "name": "raid_bdev1", 00:16:23.287 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:23.287 "strip_size_kb": 64, 00:16:23.287 "state": "online", 00:16:23.287 "raid_level": "raid5f", 00:16:23.287 "superblock": false, 00:16:23.287 "num_base_bdevs": 3, 00:16:23.287 "num_base_bdevs_discovered": 3, 00:16:23.287 "num_base_bdevs_operational": 3, 00:16:23.287 "process": { 00:16:23.287 "type": "rebuild", 00:16:23.287 "target": "spare", 00:16:23.287 "progress": { 00:16:23.287 "blocks": 92160, 00:16:23.287 "percent": 70 00:16:23.287 } 00:16:23.287 }, 00:16:23.287 "base_bdevs_list": [ 00:16:23.287 { 00:16:23.287 "name": "spare", 00:16:23.287 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:23.287 "is_configured": true, 00:16:23.287 "data_offset": 0, 00:16:23.287 "data_size": 65536 00:16:23.287 }, 00:16:23.287 { 00:16:23.287 "name": "BaseBdev2", 00:16:23.287 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:23.287 "is_configured": true, 00:16:23.287 "data_offset": 0, 00:16:23.287 "data_size": 65536 00:16:23.287 }, 00:16:23.287 { 00:16:23.287 "name": "BaseBdev3", 00:16:23.287 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:23.287 "is_configured": true, 00:16:23.287 "data_offset": 0, 00:16:23.287 "data_size": 65536 00:16:23.287 } 00:16:23.287 ] 00:16:23.287 }' 00:16:23.287 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.287 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.287 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.287 19:14:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.287 19:14:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.227 "name": "raid_bdev1", 00:16:24.227 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:24.227 "strip_size_kb": 64, 00:16:24.227 "state": "online", 00:16:24.227 "raid_level": "raid5f", 00:16:24.227 "superblock": false, 00:16:24.227 "num_base_bdevs": 3, 00:16:24.227 "num_base_bdevs_discovered": 3, 00:16:24.227 "num_base_bdevs_operational": 3, 00:16:24.227 "process": { 00:16:24.227 "type": "rebuild", 00:16:24.227 "target": "spare", 00:16:24.227 "progress": { 00:16:24.227 "blocks": 116736, 00:16:24.227 "percent": 89 00:16:24.227 } 00:16:24.227 }, 00:16:24.227 "base_bdevs_list": [ 00:16:24.227 { 
00:16:24.227 "name": "spare", 00:16:24.227 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:24.227 "is_configured": true, 00:16:24.227 "data_offset": 0, 00:16:24.227 "data_size": 65536 00:16:24.227 }, 00:16:24.227 { 00:16:24.227 "name": "BaseBdev2", 00:16:24.227 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:24.227 "is_configured": true, 00:16:24.227 "data_offset": 0, 00:16:24.227 "data_size": 65536 00:16:24.227 }, 00:16:24.227 { 00:16:24.227 "name": "BaseBdev3", 00:16:24.227 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:24.227 "is_configured": true, 00:16:24.227 "data_offset": 0, 00:16:24.227 "data_size": 65536 00:16:24.227 } 00:16:24.227 ] 00:16:24.227 }' 00:16:24.227 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.487 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.487 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.487 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.487 19:14:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.057 [2024-11-27 19:14:34.509020] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:25.057 [2024-11-27 19:14:34.509105] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:25.057 [2024-11-27 19:14:34.509149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.316 19:14:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.316 19:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.575 19:14:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.575 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.575 "name": "raid_bdev1", 00:16:25.575 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:25.575 "strip_size_kb": 64, 00:16:25.575 "state": "online", 00:16:25.575 "raid_level": "raid5f", 00:16:25.575 "superblock": false, 00:16:25.575 "num_base_bdevs": 3, 00:16:25.575 "num_base_bdevs_discovered": 3, 00:16:25.575 "num_base_bdevs_operational": 3, 00:16:25.575 "base_bdevs_list": [ 00:16:25.575 { 00:16:25.575 "name": "spare", 00:16:25.575 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:25.575 "is_configured": true, 00:16:25.575 "data_offset": 0, 00:16:25.575 "data_size": 65536 00:16:25.575 }, 00:16:25.575 { 00:16:25.575 "name": "BaseBdev2", 00:16:25.575 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:25.575 "is_configured": true, 00:16:25.575 "data_offset": 0, 00:16:25.575 "data_size": 65536 00:16:25.575 }, 00:16:25.575 { 00:16:25.575 "name": "BaseBdev3", 00:16:25.575 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:25.575 "is_configured": true, 00:16:25.575 "data_offset": 0, 00:16:25.575 "data_size": 65536 00:16:25.575 } 
00:16:25.575 ] 00:16:25.575 }' 00:16:25.575 19:14:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.575 "name": "raid_bdev1", 00:16:25.575 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:25.575 "strip_size_kb": 64, 00:16:25.575 "state": "online", 00:16:25.575 "raid_level": "raid5f", 00:16:25.575 "superblock": false, 
00:16:25.575 "num_base_bdevs": 3, 00:16:25.575 "num_base_bdevs_discovered": 3, 00:16:25.575 "num_base_bdevs_operational": 3, 00:16:25.575 "base_bdevs_list": [ 00:16:25.575 { 00:16:25.575 "name": "spare", 00:16:25.575 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:25.575 "is_configured": true, 00:16:25.575 "data_offset": 0, 00:16:25.575 "data_size": 65536 00:16:25.575 }, 00:16:25.575 { 00:16:25.575 "name": "BaseBdev2", 00:16:25.575 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:25.575 "is_configured": true, 00:16:25.575 "data_offset": 0, 00:16:25.575 "data_size": 65536 00:16:25.575 }, 00:16:25.575 { 00:16:25.575 "name": "BaseBdev3", 00:16:25.575 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 00:16:25.575 "is_configured": true, 00:16:25.575 "data_offset": 0, 00:16:25.575 "data_size": 65536 00:16:25.575 } 00:16:25.575 ] 00:16:25.575 }' 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.575 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.835 
19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.835 "name": "raid_bdev1", 00:16:25.835 "uuid": "00f7a6de-a58c-4f6b-831a-68923fca682c", 00:16:25.835 "strip_size_kb": 64, 00:16:25.835 "state": "online", 00:16:25.835 "raid_level": "raid5f", 00:16:25.835 "superblock": false, 00:16:25.835 "num_base_bdevs": 3, 00:16:25.835 "num_base_bdevs_discovered": 3, 00:16:25.835 "num_base_bdevs_operational": 3, 00:16:25.835 "base_bdevs_list": [ 00:16:25.835 { 00:16:25.835 "name": "spare", 00:16:25.835 "uuid": "32914129-eafc-52db-a226-4049093df466", 00:16:25.835 "is_configured": true, 00:16:25.835 "data_offset": 0, 00:16:25.835 "data_size": 65536 00:16:25.835 }, 00:16:25.835 { 00:16:25.835 "name": "BaseBdev2", 00:16:25.835 "uuid": "69831e74-3c26-541e-9f9b-74aceda7c722", 00:16:25.835 "is_configured": true, 00:16:25.835 "data_offset": 0, 00:16:25.835 "data_size": 65536 00:16:25.835 }, 00:16:25.835 { 00:16:25.835 "name": "BaseBdev3", 00:16:25.835 "uuid": "e882a88a-35bd-59c1-9c2a-659d8b62aca6", 
00:16:25.835 "is_configured": true, 00:16:25.835 "data_offset": 0, 00:16:25.835 "data_size": 65536 00:16:25.835 } 00:16:25.835 ] 00:16:25.835 }' 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.835 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.094 [2024-11-27 19:14:35.688356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.094 [2024-11-27 19:14:35.688391] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.094 [2024-11-27 19:14:35.688488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.094 [2024-11-27 19:14:35.688581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.094 [2024-11-27 19:14:35.688599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.094 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.095 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:26.354 /dev/nbd0 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.354 1+0 records in 00:16:26.354 1+0 records out 00:16:26.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549851 s, 7.4 MB/s 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.354 19:14:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:26.613 /dev/nbd1 00:16:26.613 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:26.613 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:26.613 19:14:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:26.613 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:26.613 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.613 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.613 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.614 1+0 records in 00:16:26.614 1+0 records out 00:16:26.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539284 s, 7.6 MB/s 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:26.614 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.873 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.132 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81651 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81651 ']' 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81651 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81651 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81651' 00:16:27.392 killing process with pid 81651 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81651 00:16:27.392 
Received shutdown signal, test time was about 60.000000 seconds 00:16:27.392 00:16:27.392 Latency(us) 00:16:27.392 [2024-11-27T19:14:37.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.392 [2024-11-27T19:14:37.028Z] =================================================================================================================== 00:16:27.392 [2024-11-27T19:14:37.028Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.392 [2024-11-27 19:14:36.842138] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.392 19:14:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81651 00:16:27.652 [2024-11-27 19:14:37.251989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.034 00:16:29.034 real 0m15.274s 00:16:29.034 user 0m18.566s 00:16:29.034 sys 0m2.127s 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.034 ************************************ 00:16:29.034 END TEST raid5f_rebuild_test 00:16:29.034 ************************************ 00:16:29.034 19:14:38 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:29.034 19:14:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:29.034 19:14:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.034 19:14:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.034 ************************************ 00:16:29.034 START TEST raid5f_rebuild_test_sb 00:16:29.034 ************************************ 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:29.034 
19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82086 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82086 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82086 ']' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.034 19:14:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.034 [2024-11-27 19:14:38.608477] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:29.034 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:29.034 Zero copy mechanism will not be used. 00:16:29.034 [2024-11-27 19:14:38.608684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82086 ] 00:16:29.294 [2024-11-27 19:14:38.777940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.294 [2024-11-27 19:14:38.907751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.554 [2024-11-27 19:14:39.136940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.554 [2024-11-27 19:14:39.136997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.812 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.812 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:29.812 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.812 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:29.812 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.812 19:14:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.071 BaseBdev1_malloc 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.071 [2024-11-27 19:14:39.474824] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:30.071 [2024-11-27 19:14:39.474895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.071 [2024-11-27 19:14:39.474921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.071 [2024-11-27 19:14:39.474934] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.071 [2024-11-27 19:14:39.477361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.071 [2024-11-27 19:14:39.477507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.071 BaseBdev1 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.071 BaseBdev2_malloc 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.071 [2024-11-27 19:14:39.533277] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:30.071 [2024-11-27 19:14:39.533339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.071 [2024-11-27 19:14:39.533365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.071 [2024-11-27 19:14:39.533378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.071 [2024-11-27 19:14:39.535668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.071 [2024-11-27 19:14:39.535802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.071 BaseBdev2 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.071 BaseBdev3_malloc 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:16:30.071 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 [2024-11-27 19:14:39.629462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:30.072 [2024-11-27 19:14:39.629518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.072 [2024-11-27 19:14:39.629544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.072 [2024-11-27 19:14:39.629557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.072 [2024-11-27 19:14:39.631920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.072 [2024-11-27 19:14:39.632031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:30.072 BaseBdev3 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 spare_malloc 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 spare_delay 00:16:30.072 
19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.072 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.072 [2024-11-27 19:14:39.702032] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:30.072 [2024-11-27 19:14:39.702086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.072 [2024-11-27 19:14:39.702103] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:30.072 [2024-11-27 19:14:39.702115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.072 [2024-11-27 19:14:39.704473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.072 [2024-11-27 19:14:39.704516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:30.331 spare 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.331 [2024-11-27 19:14:39.714077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.331 [2024-11-27 19:14:39.716175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.331 [2024-11-27 19:14:39.716239] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.331 [2024-11-27 19:14:39.716428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:30.331 [2024-11-27 19:14:39.716441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:30.331 [2024-11-27 19:14:39.716685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:30.331 [2024-11-27 19:14:39.722141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.331 [2024-11-27 19:14:39.722166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.331 [2024-11-27 19:14:39.722346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.331 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.332 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.332 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.332 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.332 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.332 "name": "raid_bdev1", 00:16:30.332 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:30.332 "strip_size_kb": 64, 00:16:30.332 "state": "online", 00:16:30.332 "raid_level": "raid5f", 00:16:30.332 "superblock": true, 00:16:30.332 "num_base_bdevs": 3, 00:16:30.332 "num_base_bdevs_discovered": 3, 00:16:30.332 "num_base_bdevs_operational": 3, 00:16:30.332 "base_bdevs_list": [ 00:16:30.332 { 00:16:30.332 "name": "BaseBdev1", 00:16:30.332 "uuid": "a42ab97a-acac-566a-9830-f2628b7a9931", 00:16:30.332 "is_configured": true, 00:16:30.332 "data_offset": 2048, 00:16:30.332 "data_size": 63488 00:16:30.332 }, 00:16:30.332 { 00:16:30.332 "name": "BaseBdev2", 00:16:30.332 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:30.332 "is_configured": true, 00:16:30.332 "data_offset": 2048, 00:16:30.332 "data_size": 63488 00:16:30.332 }, 00:16:30.332 { 00:16:30.332 "name": "BaseBdev3", 00:16:30.332 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:30.332 "is_configured": true, 00:16:30.332 "data_offset": 2048, 00:16:30.332 "data_size": 63488 00:16:30.332 } 00:16:30.332 ] 00:16:30.332 }' 00:16:30.332 19:14:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.332 19:14:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.591 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.591 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.591 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.591 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:30.591 [2024-11-27 19:14:40.196917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.591 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:30.850 19:14:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:30.850 [2024-11-27 19:14:40.436360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:30.850 /dev/nbd0 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.850 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.110 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:31.110 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:31.110 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.110 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.110 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.111 1+0 records in 00:16:31.111 1+0 records out 00:16:31.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489463 s, 8.4 MB/s 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:31.111 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:31.370 496+0 records in 00:16:31.370 496+0 records out 00:16:31.370 65011712 bytes (65 MB, 62 MiB) copied, 0.366075 s, 178 MB/s 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.370 19:14:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.629 [2024-11-27 19:14:41.093921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.629 [2024-11-27 19:14:41.110201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.629 "name": "raid_bdev1", 00:16:31.629 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:31.629 "strip_size_kb": 64, 00:16:31.629 "state": "online", 00:16:31.629 "raid_level": "raid5f", 00:16:31.629 "superblock": true, 00:16:31.629 "num_base_bdevs": 3, 00:16:31.629 "num_base_bdevs_discovered": 2, 00:16:31.629 "num_base_bdevs_operational": 2, 00:16:31.629 "base_bdevs_list": [ 00:16:31.629 { 00:16:31.629 "name": null, 00:16:31.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.629 "is_configured": false, 00:16:31.629 "data_offset": 0, 00:16:31.629 "data_size": 63488 00:16:31.629 }, 00:16:31.629 { 00:16:31.629 "name": "BaseBdev2", 00:16:31.629 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:31.629 "is_configured": true, 00:16:31.629 "data_offset": 2048, 00:16:31.629 "data_size": 63488 00:16:31.629 }, 00:16:31.629 { 00:16:31.629 "name": "BaseBdev3", 00:16:31.629 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:31.629 "is_configured": true, 00:16:31.629 "data_offset": 2048, 00:16:31.629 "data_size": 63488 00:16:31.629 } 00:16:31.629 ] 00:16:31.629 }' 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.629 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.198 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.198 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.198 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.198 [2024-11-27 19:14:41.541486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.198 [2024-11-27 19:14:41.558707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:32.198 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.198 19:14:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:32.198 [2024-11-27 19:14:41.566352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.137 "name": "raid_bdev1", 00:16:33.137 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:33.137 "strip_size_kb": 64, 00:16:33.137 "state": "online", 00:16:33.137 "raid_level": "raid5f", 00:16:33.137 "superblock": true, 00:16:33.137 "num_base_bdevs": 3, 00:16:33.137 "num_base_bdevs_discovered": 3, 00:16:33.137 "num_base_bdevs_operational": 3, 00:16:33.137 "process": { 00:16:33.137 "type": "rebuild", 00:16:33.137 "target": "spare", 00:16:33.137 "progress": { 
00:16:33.137 "blocks": 20480, 00:16:33.137 "percent": 16 00:16:33.137 } 00:16:33.137 }, 00:16:33.137 "base_bdevs_list": [ 00:16:33.137 { 00:16:33.137 "name": "spare", 00:16:33.137 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:33.137 "is_configured": true, 00:16:33.137 "data_offset": 2048, 00:16:33.137 "data_size": 63488 00:16:33.137 }, 00:16:33.137 { 00:16:33.137 "name": "BaseBdev2", 00:16:33.137 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:33.137 "is_configured": true, 00:16:33.137 "data_offset": 2048, 00:16:33.137 "data_size": 63488 00:16:33.137 }, 00:16:33.137 { 00:16:33.137 "name": "BaseBdev3", 00:16:33.137 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:33.137 "is_configured": true, 00:16:33.137 "data_offset": 2048, 00:16:33.137 "data_size": 63488 00:16:33.137 } 00:16:33.137 ] 00:16:33.137 }' 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.137 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.137 [2024-11-27 19:14:42.729022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.398 [2024-11-27 19:14:42.775884] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.398 [2024-11-27 19:14:42.775944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:33.398 [2024-11-27 19:14:42.775965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.398 [2024-11-27 19:14:42.775974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.398 19:14:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.398 "name": "raid_bdev1", 00:16:33.398 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:33.398 "strip_size_kb": 64, 00:16:33.398 "state": "online", 00:16:33.398 "raid_level": "raid5f", 00:16:33.398 "superblock": true, 00:16:33.398 "num_base_bdevs": 3, 00:16:33.398 "num_base_bdevs_discovered": 2, 00:16:33.398 "num_base_bdevs_operational": 2, 00:16:33.398 "base_bdevs_list": [ 00:16:33.398 { 00:16:33.398 "name": null, 00:16:33.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.398 "is_configured": false, 00:16:33.398 "data_offset": 0, 00:16:33.398 "data_size": 63488 00:16:33.398 }, 00:16:33.398 { 00:16:33.398 "name": "BaseBdev2", 00:16:33.398 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:33.398 "is_configured": true, 00:16:33.398 "data_offset": 2048, 00:16:33.398 "data_size": 63488 00:16:33.398 }, 00:16:33.398 { 00:16:33.398 "name": "BaseBdev3", 00:16:33.398 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:33.398 "is_configured": true, 00:16:33.398 "data_offset": 2048, 00:16:33.398 "data_size": 63488 00:16:33.398 } 00:16:33.398 ] 00:16:33.398 }' 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.398 19:14:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.659 "name": "raid_bdev1", 00:16:33.659 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:33.659 "strip_size_kb": 64, 00:16:33.659 "state": "online", 00:16:33.659 "raid_level": "raid5f", 00:16:33.659 "superblock": true, 00:16:33.659 "num_base_bdevs": 3, 00:16:33.659 "num_base_bdevs_discovered": 2, 00:16:33.659 "num_base_bdevs_operational": 2, 00:16:33.659 "base_bdevs_list": [ 00:16:33.659 { 00:16:33.659 "name": null, 00:16:33.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.659 "is_configured": false, 00:16:33.659 "data_offset": 0, 00:16:33.659 "data_size": 63488 00:16:33.659 }, 00:16:33.659 { 00:16:33.659 "name": "BaseBdev2", 00:16:33.659 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:33.659 "is_configured": true, 00:16:33.659 "data_offset": 2048, 00:16:33.659 "data_size": 63488 00:16:33.659 }, 00:16:33.659 { 00:16:33.659 "name": "BaseBdev3", 00:16:33.659 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:33.659 "is_configured": true, 00:16:33.659 "data_offset": 2048, 00:16:33.659 "data_size": 63488 00:16:33.659 } 00:16:33.659 ] 00:16:33.659 }' 00:16:33.659 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.919 [2024-11-27 19:14:43.395925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.919 [2024-11-27 19:14:43.412030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.919 19:14:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:33.919 [2024-11-27 19:14:43.419895] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.007 "name": "raid_bdev1", 00:16:35.007 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:35.007 "strip_size_kb": 64, 00:16:35.007 "state": "online", 00:16:35.007 "raid_level": "raid5f", 00:16:35.007 "superblock": true, 00:16:35.007 "num_base_bdevs": 3, 00:16:35.007 "num_base_bdevs_discovered": 3, 00:16:35.007 "num_base_bdevs_operational": 3, 00:16:35.007 "process": { 00:16:35.007 "type": "rebuild", 00:16:35.007 "target": "spare", 00:16:35.007 "progress": { 00:16:35.007 "blocks": 20480, 00:16:35.007 "percent": 16 00:16:35.007 } 00:16:35.007 }, 00:16:35.007 "base_bdevs_list": [ 00:16:35.007 { 00:16:35.007 "name": "spare", 00:16:35.007 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:35.007 "is_configured": true, 00:16:35.007 "data_offset": 2048, 00:16:35.007 "data_size": 63488 00:16:35.007 }, 00:16:35.007 { 00:16:35.007 "name": "BaseBdev2", 00:16:35.007 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:35.007 "is_configured": true, 00:16:35.007 "data_offset": 2048, 00:16:35.007 "data_size": 63488 00:16:35.007 }, 00:16:35.007 { 00:16:35.007 "name": "BaseBdev3", 00:16:35.007 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:35.007 "is_configured": true, 00:16:35.007 "data_offset": 2048, 00:16:35.007 "data_size": 63488 00:16:35.007 } 00:16:35.007 ] 00:16:35.007 }' 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.007 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:35.008 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=566 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.008 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.008 "name": "raid_bdev1", 00:16:35.008 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:35.008 "strip_size_kb": 64, 00:16:35.008 "state": "online", 00:16:35.008 "raid_level": "raid5f", 00:16:35.008 "superblock": true, 00:16:35.008 "num_base_bdevs": 3, 00:16:35.008 "num_base_bdevs_discovered": 3, 00:16:35.008 "num_base_bdevs_operational": 3, 00:16:35.008 "process": { 00:16:35.008 "type": "rebuild", 00:16:35.008 "target": "spare", 00:16:35.008 "progress": { 00:16:35.008 "blocks": 22528, 00:16:35.008 "percent": 17 00:16:35.008 } 00:16:35.008 }, 00:16:35.008 "base_bdevs_list": [ 00:16:35.008 { 00:16:35.008 "name": "spare", 00:16:35.008 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:35.008 "is_configured": true, 00:16:35.008 "data_offset": 2048, 00:16:35.008 "data_size": 63488 00:16:35.008 }, 00:16:35.008 { 00:16:35.008 "name": "BaseBdev2", 00:16:35.008 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:35.008 "is_configured": true, 00:16:35.008 "data_offset": 2048, 00:16:35.008 "data_size": 63488 00:16:35.008 }, 00:16:35.008 { 00:16:35.008 "name": "BaseBdev3", 00:16:35.008 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:35.008 "is_configured": true, 00:16:35.008 "data_offset": 2048, 00:16:35.008 "data_size": 63488 00:16:35.008 } 00:16:35.008 ] 00:16:35.008 }' 00:16:35.282 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.282 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.282 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.282 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:35.282 19:14:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.220 "name": "raid_bdev1", 00:16:36.220 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:36.220 "strip_size_kb": 64, 00:16:36.220 "state": "online", 00:16:36.220 "raid_level": "raid5f", 00:16:36.220 "superblock": true, 00:16:36.220 "num_base_bdevs": 3, 00:16:36.220 "num_base_bdevs_discovered": 3, 00:16:36.220 "num_base_bdevs_operational": 3, 00:16:36.220 "process": { 00:16:36.220 "type": "rebuild", 00:16:36.220 "target": "spare", 00:16:36.220 "progress": { 00:16:36.220 "blocks": 45056, 00:16:36.220 "percent": 35 00:16:36.220 } 00:16:36.220 }, 
00:16:36.220 "base_bdevs_list": [ 00:16:36.220 { 00:16:36.220 "name": "spare", 00:16:36.220 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:36.220 "is_configured": true, 00:16:36.220 "data_offset": 2048, 00:16:36.220 "data_size": 63488 00:16:36.220 }, 00:16:36.220 { 00:16:36.220 "name": "BaseBdev2", 00:16:36.220 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:36.220 "is_configured": true, 00:16:36.220 "data_offset": 2048, 00:16:36.220 "data_size": 63488 00:16:36.220 }, 00:16:36.220 { 00:16:36.220 "name": "BaseBdev3", 00:16:36.220 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:36.220 "is_configured": true, 00:16:36.220 "data_offset": 2048, 00:16:36.220 "data_size": 63488 00:16:36.220 } 00:16:36.220 ] 00:16:36.220 }' 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.220 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.480 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.480 19:14:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.422 
19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.422 "name": "raid_bdev1", 00:16:37.422 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:37.422 "strip_size_kb": 64, 00:16:37.422 "state": "online", 00:16:37.422 "raid_level": "raid5f", 00:16:37.422 "superblock": true, 00:16:37.422 "num_base_bdevs": 3, 00:16:37.422 "num_base_bdevs_discovered": 3, 00:16:37.422 "num_base_bdevs_operational": 3, 00:16:37.422 "process": { 00:16:37.422 "type": "rebuild", 00:16:37.422 "target": "spare", 00:16:37.422 "progress": { 00:16:37.422 "blocks": 69632, 00:16:37.422 "percent": 54 00:16:37.422 } 00:16:37.422 }, 00:16:37.422 "base_bdevs_list": [ 00:16:37.422 { 00:16:37.422 "name": "spare", 00:16:37.422 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:37.422 "is_configured": true, 00:16:37.422 "data_offset": 2048, 00:16:37.422 "data_size": 63488 00:16:37.422 }, 00:16:37.422 { 00:16:37.422 "name": "BaseBdev2", 00:16:37.422 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:37.422 "is_configured": true, 00:16:37.422 "data_offset": 2048, 00:16:37.422 "data_size": 63488 00:16:37.422 }, 00:16:37.422 { 00:16:37.422 "name": "BaseBdev3", 00:16:37.422 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:37.422 "is_configured": true, 00:16:37.422 "data_offset": 2048, 00:16:37.422 "data_size": 63488 00:16:37.422 } 00:16:37.422 ] 00:16:37.422 }' 00:16:37.422 19:14:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.422 19:14:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.803 "name": "raid_bdev1", 00:16:38.803 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:38.803 
"strip_size_kb": 64, 00:16:38.803 "state": "online", 00:16:38.803 "raid_level": "raid5f", 00:16:38.803 "superblock": true, 00:16:38.803 "num_base_bdevs": 3, 00:16:38.803 "num_base_bdevs_discovered": 3, 00:16:38.803 "num_base_bdevs_operational": 3, 00:16:38.803 "process": { 00:16:38.803 "type": "rebuild", 00:16:38.803 "target": "spare", 00:16:38.803 "progress": { 00:16:38.803 "blocks": 92160, 00:16:38.803 "percent": 72 00:16:38.803 } 00:16:38.803 }, 00:16:38.803 "base_bdevs_list": [ 00:16:38.803 { 00:16:38.803 "name": "spare", 00:16:38.803 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:38.803 "is_configured": true, 00:16:38.803 "data_offset": 2048, 00:16:38.803 "data_size": 63488 00:16:38.803 }, 00:16:38.803 { 00:16:38.803 "name": "BaseBdev2", 00:16:38.803 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:38.803 "is_configured": true, 00:16:38.803 "data_offset": 2048, 00:16:38.803 "data_size": 63488 00:16:38.803 }, 00:16:38.803 { 00:16:38.803 "name": "BaseBdev3", 00:16:38.803 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:38.803 "is_configured": true, 00:16:38.803 "data_offset": 2048, 00:16:38.803 "data_size": 63488 00:16:38.803 } 00:16:38.803 ] 00:16:38.803 }' 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.803 19:14:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.744 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.745 "name": "raid_bdev1", 00:16:39.745 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:39.745 "strip_size_kb": 64, 00:16:39.745 "state": "online", 00:16:39.745 "raid_level": "raid5f", 00:16:39.745 "superblock": true, 00:16:39.745 "num_base_bdevs": 3, 00:16:39.745 "num_base_bdevs_discovered": 3, 00:16:39.745 "num_base_bdevs_operational": 3, 00:16:39.745 "process": { 00:16:39.745 "type": "rebuild", 00:16:39.745 "target": "spare", 00:16:39.745 "progress": { 00:16:39.745 "blocks": 116736, 00:16:39.745 "percent": 91 00:16:39.745 } 00:16:39.745 }, 00:16:39.745 "base_bdevs_list": [ 00:16:39.745 { 00:16:39.745 "name": "spare", 00:16:39.745 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:39.745 "is_configured": true, 00:16:39.745 "data_offset": 2048, 00:16:39.745 "data_size": 63488 00:16:39.745 }, 00:16:39.745 { 00:16:39.745 "name": "BaseBdev2", 00:16:39.745 "uuid": 
"c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:39.745 "is_configured": true, 00:16:39.745 "data_offset": 2048, 00:16:39.745 "data_size": 63488 00:16:39.745 }, 00:16:39.745 { 00:16:39.745 "name": "BaseBdev3", 00:16:39.745 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:39.745 "is_configured": true, 00:16:39.745 "data_offset": 2048, 00:16:39.745 "data_size": 63488 00:16:39.745 } 00:16:39.745 ] 00:16:39.745 }' 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.745 19:14:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.315 [2024-11-27 19:14:49.668426] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:40.315 [2024-11-27 19:14:49.668561] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:40.315 [2024-11-27 19:14:49.668728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.885 "name": "raid_bdev1", 00:16:40.885 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:40.885 "strip_size_kb": 64, 00:16:40.885 "state": "online", 00:16:40.885 "raid_level": "raid5f", 00:16:40.885 "superblock": true, 00:16:40.885 "num_base_bdevs": 3, 00:16:40.885 "num_base_bdevs_discovered": 3, 00:16:40.885 "num_base_bdevs_operational": 3, 00:16:40.885 "base_bdevs_list": [ 00:16:40.885 { 00:16:40.885 "name": "spare", 00:16:40.885 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:40.885 "is_configured": true, 00:16:40.885 "data_offset": 2048, 00:16:40.885 "data_size": 63488 00:16:40.885 }, 00:16:40.885 { 00:16:40.885 "name": "BaseBdev2", 00:16:40.885 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:40.885 "is_configured": true, 00:16:40.885 "data_offset": 2048, 00:16:40.885 "data_size": 63488 00:16:40.885 }, 00:16:40.885 { 00:16:40.885 "name": "BaseBdev3", 00:16:40.885 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:40.885 "is_configured": true, 00:16:40.885 "data_offset": 2048, 00:16:40.885 "data_size": 63488 00:16:40.885 } 00:16:40.885 ] 00:16:40.885 }' 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.885 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.885 "name": "raid_bdev1", 00:16:40.885 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:40.885 "strip_size_kb": 64, 00:16:40.885 "state": "online", 00:16:40.885 "raid_level": "raid5f", 00:16:40.885 "superblock": true, 00:16:40.885 "num_base_bdevs": 3, 00:16:40.885 "num_base_bdevs_discovered": 3, 00:16:40.885 "num_base_bdevs_operational": 3, 00:16:40.885 "base_bdevs_list": [ 
00:16:40.886 { 00:16:40.886 "name": "spare", 00:16:40.886 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:40.886 "is_configured": true, 00:16:40.886 "data_offset": 2048, 00:16:40.886 "data_size": 63488 00:16:40.886 }, 00:16:40.886 { 00:16:40.886 "name": "BaseBdev2", 00:16:40.886 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:40.886 "is_configured": true, 00:16:40.886 "data_offset": 2048, 00:16:40.886 "data_size": 63488 00:16:40.886 }, 00:16:40.886 { 00:16:40.886 "name": "BaseBdev3", 00:16:40.886 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:40.886 "is_configured": true, 00:16:40.886 "data_offset": 2048, 00:16:40.886 "data_size": 63488 00:16:40.886 } 00:16:40.886 ] 00:16:40.886 }' 00:16:40.886 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.146 19:14:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.146 "name": "raid_bdev1", 00:16:41.146 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:41.146 "strip_size_kb": 64, 00:16:41.146 "state": "online", 00:16:41.146 "raid_level": "raid5f", 00:16:41.146 "superblock": true, 00:16:41.146 "num_base_bdevs": 3, 00:16:41.146 "num_base_bdevs_discovered": 3, 00:16:41.146 "num_base_bdevs_operational": 3, 00:16:41.146 "base_bdevs_list": [ 00:16:41.146 { 00:16:41.146 "name": "spare", 00:16:41.146 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:41.146 "is_configured": true, 00:16:41.146 "data_offset": 2048, 00:16:41.146 "data_size": 63488 00:16:41.146 }, 00:16:41.146 { 00:16:41.146 "name": "BaseBdev2", 00:16:41.146 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:41.146 "is_configured": true, 00:16:41.146 "data_offset": 2048, 00:16:41.146 "data_size": 63488 00:16:41.146 }, 00:16:41.146 { 00:16:41.146 "name": "BaseBdev3", 00:16:41.146 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:41.146 "is_configured": true, 00:16:41.146 "data_offset": 2048, 00:16:41.146 
"data_size": 63488 00:16:41.146 } 00:16:41.146 ] 00:16:41.146 }' 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.146 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.406 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.406 19:14:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 [2024-11-27 19:14:51.001510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.406 [2024-11-27 19:14:51.001597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.406 [2024-11-27 19:14:51.001741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.406 [2024-11-27 19:14:51.001865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.406 [2024-11-27 19:14:51.001920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:41.406 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.406 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:41.406 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.406 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.406 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:41.666 /dev/nbd0 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.666 19:14:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:41.666 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.667 1+0 records in 00:16:41.667 1+0 records out 00:16:41.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527216 s, 7.8 MB/s 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.667 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:41.926 /dev/nbd1 00:16:41.926 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:42.187 19:14:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.187 1+0 records in 00:16:42.187 1+0 records out 00:16:42.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310482 s, 13.2 MB/s 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.187 19:14:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.187 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.447 19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.447 
19:14:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.707 [2024-11-27 19:14:52.160439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.707 
[2024-11-27 19:14:52.160504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.707 [2024-11-27 19:14:52.160529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:42.707 [2024-11-27 19:14:52.160541] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.707 [2024-11-27 19:14:52.163138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.707 [2024-11-27 19:14:52.163177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.707 [2024-11-27 19:14:52.163260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:42.707 [2024-11-27 19:14:52.163319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.707 [2024-11-27 19:14:52.163459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.707 [2024-11-27 19:14:52.163556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.707 spare 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.707 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.708 [2024-11-27 19:14:52.263479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:42.708 [2024-11-27 19:14:52.263558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:42.708 [2024-11-27 19:14:52.263894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:42.708 [2024-11-27 19:14:52.269294] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:42.708 [2024-11-27 19:14:52.269350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:42.708 [2024-11-27 19:14:52.269591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.708 "name": "raid_bdev1", 00:16:42.708 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:42.708 "strip_size_kb": 64, 00:16:42.708 "state": "online", 00:16:42.708 "raid_level": "raid5f", 00:16:42.708 "superblock": true, 00:16:42.708 "num_base_bdevs": 3, 00:16:42.708 "num_base_bdevs_discovered": 3, 00:16:42.708 "num_base_bdevs_operational": 3, 00:16:42.708 "base_bdevs_list": [ 00:16:42.708 { 00:16:42.708 "name": "spare", 00:16:42.708 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:42.708 "is_configured": true, 00:16:42.708 "data_offset": 2048, 00:16:42.708 "data_size": 63488 00:16:42.708 }, 00:16:42.708 { 00:16:42.708 "name": "BaseBdev2", 00:16:42.708 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:42.708 "is_configured": true, 00:16:42.708 "data_offset": 2048, 00:16:42.708 "data_size": 63488 00:16:42.708 }, 00:16:42.708 { 00:16:42.708 "name": "BaseBdev3", 00:16:42.708 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:42.708 "is_configured": true, 00:16:42.708 "data_offset": 2048, 00:16:42.708 "data_size": 63488 00:16:42.708 } 00:16:42.708 ] 00:16:42.708 }' 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.708 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.278 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.279 "name": "raid_bdev1", 00:16:43.279 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:43.279 "strip_size_kb": 64, 00:16:43.279 "state": "online", 00:16:43.279 "raid_level": "raid5f", 00:16:43.279 "superblock": true, 00:16:43.279 "num_base_bdevs": 3, 00:16:43.279 "num_base_bdevs_discovered": 3, 00:16:43.279 "num_base_bdevs_operational": 3, 00:16:43.279 "base_bdevs_list": [ 00:16:43.279 { 00:16:43.279 "name": "spare", 00:16:43.279 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:43.279 "is_configured": true, 00:16:43.279 "data_offset": 2048, 00:16:43.279 "data_size": 63488 00:16:43.279 }, 00:16:43.279 { 00:16:43.279 "name": "BaseBdev2", 00:16:43.279 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:43.279 "is_configured": true, 00:16:43.279 "data_offset": 2048, 00:16:43.279 "data_size": 63488 00:16:43.279 }, 00:16:43.279 { 00:16:43.279 "name": "BaseBdev3", 00:16:43.279 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:43.279 "is_configured": true, 00:16:43.279 "data_offset": 2048, 00:16:43.279 "data_size": 63488 00:16:43.279 } 00:16:43.279 ] 00:16:43.279 }' 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.279 [2024-11-27 19:14:52.887567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.279 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.539 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.539 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.539 "name": "raid_bdev1", 00:16:43.539 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:43.539 "strip_size_kb": 64, 00:16:43.539 "state": "online", 00:16:43.539 "raid_level": "raid5f", 00:16:43.539 "superblock": true, 00:16:43.539 "num_base_bdevs": 3, 00:16:43.539 "num_base_bdevs_discovered": 2, 00:16:43.539 "num_base_bdevs_operational": 2, 00:16:43.539 "base_bdevs_list": [ 00:16:43.539 { 00:16:43.539 "name": null, 00:16:43.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.539 "is_configured": false, 00:16:43.539 "data_offset": 0, 00:16:43.539 "data_size": 63488 00:16:43.539 }, 00:16:43.539 { 00:16:43.540 "name": "BaseBdev2", 
00:16:43.540 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:43.540 "is_configured": true, 00:16:43.540 "data_offset": 2048, 00:16:43.540 "data_size": 63488 00:16:43.540 }, 00:16:43.540 { 00:16:43.540 "name": "BaseBdev3", 00:16:43.540 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:43.540 "is_configured": true, 00:16:43.540 "data_offset": 2048, 00:16:43.540 "data_size": 63488 00:16:43.540 } 00:16:43.540 ] 00:16:43.540 }' 00:16:43.540 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.540 19:14:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.800 19:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.800 19:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.800 19:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.800 [2024-11-27 19:14:53.314846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.800 [2024-11-27 19:14:53.315020] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.800 [2024-11-27 19:14:53.315038] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:43.800 [2024-11-27 19:14:53.315083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.800 [2024-11-27 19:14:53.331201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:43.800 19:14:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.800 19:14:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:43.800 [2024-11-27 19:14:53.338572] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.740 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.000 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.000 "name": "raid_bdev1", 00:16:45.000 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:45.000 "strip_size_kb": 64, 00:16:45.000 "state": "online", 00:16:45.000 
"raid_level": "raid5f", 00:16:45.000 "superblock": true, 00:16:45.000 "num_base_bdevs": 3, 00:16:45.000 "num_base_bdevs_discovered": 3, 00:16:45.000 "num_base_bdevs_operational": 3, 00:16:45.000 "process": { 00:16:45.001 "type": "rebuild", 00:16:45.001 "target": "spare", 00:16:45.001 "progress": { 00:16:45.001 "blocks": 20480, 00:16:45.001 "percent": 16 00:16:45.001 } 00:16:45.001 }, 00:16:45.001 "base_bdevs_list": [ 00:16:45.001 { 00:16:45.001 "name": "spare", 00:16:45.001 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:45.001 "is_configured": true, 00:16:45.001 "data_offset": 2048, 00:16:45.001 "data_size": 63488 00:16:45.001 }, 00:16:45.001 { 00:16:45.001 "name": "BaseBdev2", 00:16:45.001 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:45.001 "is_configured": true, 00:16:45.001 "data_offset": 2048, 00:16:45.001 "data_size": 63488 00:16:45.001 }, 00:16:45.001 { 00:16:45.001 "name": "BaseBdev3", 00:16:45.001 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:45.001 "is_configured": true, 00:16:45.001 "data_offset": 2048, 00:16:45.001 "data_size": 63488 00:16:45.001 } 00:16:45.001 ] 00:16:45.001 }' 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.001 [2024-11-27 19:14:54.493477] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.001 [2024-11-27 19:14:54.548186] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.001 [2024-11-27 19:14:54.548251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.001 [2024-11-27 19:14:54.548267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.001 [2024-11-27 19:14:54.548278] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.001 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.261 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.261 "name": "raid_bdev1", 00:16:45.261 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:45.261 "strip_size_kb": 64, 00:16:45.261 "state": "online", 00:16:45.261 "raid_level": "raid5f", 00:16:45.261 "superblock": true, 00:16:45.261 "num_base_bdevs": 3, 00:16:45.261 "num_base_bdevs_discovered": 2, 00:16:45.261 "num_base_bdevs_operational": 2, 00:16:45.261 "base_bdevs_list": [ 00:16:45.261 { 00:16:45.261 "name": null, 00:16:45.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.261 "is_configured": false, 00:16:45.261 "data_offset": 0, 00:16:45.261 "data_size": 63488 00:16:45.261 }, 00:16:45.261 { 00:16:45.261 "name": "BaseBdev2", 00:16:45.261 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:45.261 "is_configured": true, 00:16:45.261 "data_offset": 2048, 00:16:45.261 "data_size": 63488 00:16:45.261 }, 00:16:45.261 { 00:16:45.261 "name": "BaseBdev3", 00:16:45.261 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:45.261 "is_configured": true, 00:16:45.261 "data_offset": 2048, 00:16:45.261 "data_size": 63488 00:16:45.261 } 00:16:45.261 ] 00:16:45.261 }' 00:16:45.261 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.261 19:14:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.521 19:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.521 19:14:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.521 19:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.521 [2024-11-27 19:14:55.012849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.521 [2024-11-27 19:14:55.013004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.521 [2024-11-27 19:14:55.013053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:45.521 [2024-11-27 19:14:55.013096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.521 [2024-11-27 19:14:55.013767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.521 [2024-11-27 19:14:55.013842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.521 [2024-11-27 19:14:55.014003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:45.521 [2024-11-27 19:14:55.014058] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.521 [2024-11-27 19:14:55.014111] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:45.521 [2024-11-27 19:14:55.014169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.521 [2024-11-27 19:14:55.030643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:45.521 spare 00:16:45.521 19:14:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.521 19:14:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:45.521 [2024-11-27 19:14:55.038442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.460 "name": "raid_bdev1", 00:16:46.460 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:46.460 "strip_size_kb": 64, 00:16:46.460 "state": 
"online", 00:16:46.460 "raid_level": "raid5f", 00:16:46.460 "superblock": true, 00:16:46.460 "num_base_bdevs": 3, 00:16:46.460 "num_base_bdevs_discovered": 3, 00:16:46.460 "num_base_bdevs_operational": 3, 00:16:46.460 "process": { 00:16:46.460 "type": "rebuild", 00:16:46.460 "target": "spare", 00:16:46.460 "progress": { 00:16:46.460 "blocks": 20480, 00:16:46.460 "percent": 16 00:16:46.460 } 00:16:46.460 }, 00:16:46.460 "base_bdevs_list": [ 00:16:46.460 { 00:16:46.460 "name": "spare", 00:16:46.460 "uuid": "37c5cd70-36ae-5304-8016-e340d328342c", 00:16:46.460 "is_configured": true, 00:16:46.460 "data_offset": 2048, 00:16:46.460 "data_size": 63488 00:16:46.460 }, 00:16:46.460 { 00:16:46.460 "name": "BaseBdev2", 00:16:46.460 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:46.460 "is_configured": true, 00:16:46.460 "data_offset": 2048, 00:16:46.460 "data_size": 63488 00:16:46.460 }, 00:16:46.460 { 00:16:46.460 "name": "BaseBdev3", 00:16:46.460 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:46.460 "is_configured": true, 00:16:46.460 "data_offset": 2048, 00:16:46.460 "data_size": 63488 00:16:46.460 } 00:16:46.460 ] 00:16:46.460 }' 00:16:46.460 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.720 [2024-11-27 19:14:56.177332] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.720 [2024-11-27 19:14:56.247837] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.720 [2024-11-27 19:14:56.247889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.720 [2024-11-27 19:14:56.247908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.720 [2024-11-27 19:14:56.247915] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.720 "name": "raid_bdev1", 00:16:46.720 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:46.720 "strip_size_kb": 64, 00:16:46.720 "state": "online", 00:16:46.720 "raid_level": "raid5f", 00:16:46.720 "superblock": true, 00:16:46.720 "num_base_bdevs": 3, 00:16:46.720 "num_base_bdevs_discovered": 2, 00:16:46.720 "num_base_bdevs_operational": 2, 00:16:46.720 "base_bdevs_list": [ 00:16:46.720 { 00:16:46.720 "name": null, 00:16:46.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.720 "is_configured": false, 00:16:46.720 "data_offset": 0, 00:16:46.720 "data_size": 63488 00:16:46.720 }, 00:16:46.720 { 00:16:46.720 "name": "BaseBdev2", 00:16:46.720 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:46.720 "is_configured": true, 00:16:46.720 "data_offset": 2048, 00:16:46.720 "data_size": 63488 00:16:46.720 }, 00:16:46.720 { 00:16:46.720 "name": "BaseBdev3", 00:16:46.720 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:46.720 "is_configured": true, 00:16:46.720 "data_offset": 2048, 00:16:46.720 "data_size": 63488 00:16:46.720 } 00:16:46.720 ] 00:16:46.720 }' 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.720 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.290 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.290 "name": "raid_bdev1", 00:16:47.290 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:47.290 "strip_size_kb": 64, 00:16:47.290 "state": "online", 00:16:47.290 "raid_level": "raid5f", 00:16:47.290 "superblock": true, 00:16:47.290 "num_base_bdevs": 3, 00:16:47.290 "num_base_bdevs_discovered": 2, 00:16:47.290 "num_base_bdevs_operational": 2, 00:16:47.290 "base_bdevs_list": [ 00:16:47.290 { 00:16:47.290 "name": null, 00:16:47.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.290 "is_configured": false, 00:16:47.290 "data_offset": 0, 00:16:47.290 "data_size": 63488 00:16:47.290 }, 00:16:47.290 { 00:16:47.290 "name": "BaseBdev2", 00:16:47.290 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:47.291 "is_configured": true, 00:16:47.291 "data_offset": 2048, 00:16:47.291 "data_size": 63488 00:16:47.291 }, 00:16:47.291 { 00:16:47.291 "name": "BaseBdev3", 00:16:47.291 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:47.291 "is_configured": true, 
00:16:47.291 "data_offset": 2048, 00:16:47.291 "data_size": 63488 00:16:47.291 } 00:16:47.291 ] 00:16:47.291 }' 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.291 [2024-11-27 19:14:56.875757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:47.291 [2024-11-27 19:14:56.875865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.291 [2024-11-27 19:14:56.875912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:47.291 [2024-11-27 19:14:56.875942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.291 [2024-11-27 19:14:56.876569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.291 [2024-11-27 
19:14:56.876641] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:47.291 [2024-11-27 19:14:56.876797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:47.291 [2024-11-27 19:14:56.876849] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.291 [2024-11-27 19:14:56.876923] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:47.291 [2024-11-27 19:14:56.877006] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:47.291 BaseBdev1 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.291 19:14:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.673 19:14:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.673 "name": "raid_bdev1", 00:16:48.673 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:48.673 "strip_size_kb": 64, 00:16:48.673 "state": "online", 00:16:48.673 "raid_level": "raid5f", 00:16:48.673 "superblock": true, 00:16:48.673 "num_base_bdevs": 3, 00:16:48.673 "num_base_bdevs_discovered": 2, 00:16:48.673 "num_base_bdevs_operational": 2, 00:16:48.673 "base_bdevs_list": [ 00:16:48.673 { 00:16:48.673 "name": null, 00:16:48.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.673 "is_configured": false, 00:16:48.673 "data_offset": 0, 00:16:48.673 "data_size": 63488 00:16:48.673 }, 00:16:48.673 { 00:16:48.673 "name": "BaseBdev2", 00:16:48.673 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:48.673 "is_configured": true, 00:16:48.673 "data_offset": 2048, 00:16:48.673 "data_size": 63488 00:16:48.673 }, 00:16:48.673 { 00:16:48.673 "name": "BaseBdev3", 00:16:48.673 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:48.673 "is_configured": true, 00:16:48.673 "data_offset": 2048, 00:16:48.673 "data_size": 63488 00:16:48.673 } 00:16:48.673 ] 00:16:48.673 }' 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.673 19:14:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.933 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.933 "name": "raid_bdev1", 00:16:48.933 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:48.933 "strip_size_kb": 64, 00:16:48.933 "state": "online", 00:16:48.933 "raid_level": "raid5f", 00:16:48.933 "superblock": true, 00:16:48.933 "num_base_bdevs": 3, 00:16:48.933 "num_base_bdevs_discovered": 2, 00:16:48.933 "num_base_bdevs_operational": 2, 00:16:48.933 "base_bdevs_list": [ 00:16:48.933 { 00:16:48.933 "name": null, 00:16:48.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.933 "is_configured": false, 00:16:48.933 "data_offset": 0, 00:16:48.933 "data_size": 63488 00:16:48.933 }, 00:16:48.933 { 00:16:48.933 "name": "BaseBdev2", 00:16:48.933 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 
00:16:48.933 "is_configured": true, 00:16:48.933 "data_offset": 2048, 00:16:48.933 "data_size": 63488 00:16:48.933 }, 00:16:48.933 { 00:16:48.933 "name": "BaseBdev3", 00:16:48.933 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:48.933 "is_configured": true, 00:16:48.933 "data_offset": 2048, 00:16:48.934 "data_size": 63488 00:16:48.934 } 00:16:48.934 ] 00:16:48.934 }' 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.934 19:14:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.934 [2024-11-27 19:14:58.493215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.934 [2024-11-27 19:14:58.493492] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.934 [2024-11-27 19:14:58.493551] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:48.934 request: 00:16:48.934 { 00:16:48.934 "base_bdev": "BaseBdev1", 00:16:48.934 "raid_bdev": "raid_bdev1", 00:16:48.934 "method": "bdev_raid_add_base_bdev", 00:16:48.934 "req_id": 1 00:16:48.934 } 00:16:48.934 Got JSON-RPC error response 00:16:48.934 response: 00:16:48.934 { 00:16:48.934 "code": -22, 00:16:48.934 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:48.934 } 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.934 19:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:49.873 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:49.873 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.873 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.873 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.873 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.132 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.132 "name": "raid_bdev1", 00:16:50.132 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:50.132 "strip_size_kb": 64, 00:16:50.132 "state": "online", 00:16:50.132 "raid_level": "raid5f", 00:16:50.132 "superblock": true, 00:16:50.132 "num_base_bdevs": 3, 00:16:50.132 "num_base_bdevs_discovered": 2, 00:16:50.132 "num_base_bdevs_operational": 2, 00:16:50.132 "base_bdevs_list": [ 00:16:50.132 { 00:16:50.132 "name": null, 00:16:50.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.132 "is_configured": false, 00:16:50.132 "data_offset": 0, 00:16:50.132 "data_size": 63488 00:16:50.132 }, 00:16:50.132 { 00:16:50.132 
"name": "BaseBdev2", 00:16:50.132 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:50.132 "is_configured": true, 00:16:50.132 "data_offset": 2048, 00:16:50.132 "data_size": 63488 00:16:50.132 }, 00:16:50.132 { 00:16:50.132 "name": "BaseBdev3", 00:16:50.132 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:50.132 "is_configured": true, 00:16:50.132 "data_offset": 2048, 00:16:50.132 "data_size": 63488 00:16:50.132 } 00:16:50.132 ] 00:16:50.133 }' 00:16:50.133 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.133 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.392 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.392 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.392 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.392 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.392 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.392 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.393 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.393 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.393 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.393 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.393 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.393 "name": "raid_bdev1", 00:16:50.393 "uuid": "5080831b-eec3-46ad-bdc6-264a6f733420", 00:16:50.393 
"strip_size_kb": 64, 00:16:50.393 "state": "online", 00:16:50.393 "raid_level": "raid5f", 00:16:50.393 "superblock": true, 00:16:50.393 "num_base_bdevs": 3, 00:16:50.393 "num_base_bdevs_discovered": 2, 00:16:50.393 "num_base_bdevs_operational": 2, 00:16:50.393 "base_bdevs_list": [ 00:16:50.393 { 00:16:50.393 "name": null, 00:16:50.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.393 "is_configured": false, 00:16:50.393 "data_offset": 0, 00:16:50.393 "data_size": 63488 00:16:50.393 }, 00:16:50.393 { 00:16:50.393 "name": "BaseBdev2", 00:16:50.393 "uuid": "c1d0c42a-13f4-5a68-ad3b-69525dc26023", 00:16:50.393 "is_configured": true, 00:16:50.393 "data_offset": 2048, 00:16:50.393 "data_size": 63488 00:16:50.393 }, 00:16:50.393 { 00:16:50.393 "name": "BaseBdev3", 00:16:50.393 "uuid": "16a69375-aa6e-5fc7-809a-dc2ba6bf746e", 00:16:50.393 "is_configured": true, 00:16:50.393 "data_offset": 2048, 00:16:50.393 "data_size": 63488 00:16:50.393 } 00:16:50.393 ] 00:16:50.393 }' 00:16:50.393 19:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.393 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.393 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82086 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82086 ']' 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82086 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.652 19:15:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82086 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82086' 00:16:50.652 killing process with pid 82086 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82086 00:16:50.652 Received shutdown signal, test time was about 60.000000 seconds 00:16:50.652 00:16:50.652 Latency(us) 00:16:50.652 [2024-11-27T19:15:00.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.652 [2024-11-27T19:15:00.288Z] =================================================================================================================== 00:16:50.652 [2024-11-27T19:15:00.288Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.652 [2024-11-27 19:15:00.107719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.652 [2024-11-27 19:15:00.107880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.652 19:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82086 00:16:50.652 [2024-11-27 19:15:00.107963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.652 [2024-11-27 19:15:00.107979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:50.911 [2024-11-27 19:15:00.532162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.293 19:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:52.293 00:16:52.293 real 0m23.197s 00:16:52.293 user 0m29.328s 
00:16:52.293 sys 0m2.969s 00:16:52.293 ************************************ 00:16:52.293 END TEST raid5f_rebuild_test_sb 00:16:52.293 ************************************ 00:16:52.293 19:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.293 19:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 19:15:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:52.293 19:15:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:52.293 19:15:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:52.293 19:15:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.293 19:15:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 ************************************ 00:16:52.293 START TEST raid5f_state_function_test 00:16:52.293 ************************************ 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:52.293 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82839 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82839' 00:16:52.294 Process raid pid: 82839 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82839 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82839 ']' 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.294 19:15:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 [2024-11-27 19:15:01.875944] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:52.294 [2024-11-27 19:15:01.876054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.553 [2024-11-27 19:15:02.050666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.553 [2024-11-27 19:15:02.187720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.813 [2024-11-27 19:15:02.425628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.813 [2024-11-27 19:15:02.425673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.072 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.072 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:53.072 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.072 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.072 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.072 [2024-11-27 19:15:02.706188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.072 [2024-11-27 19:15:02.706255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.072 [2024-11-27 19:15:02.706266] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.073 [2024-11-27 19:15:02.706276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.073 [2024-11-27 19:15:02.706283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:53.073 [2024-11-27 19:15:02.706292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.073 [2024-11-27 19:15:02.706298] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.073 [2024-11-27 19:15:02.706307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.332 19:15:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.332 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.332 "name": "Existed_Raid", 00:16:53.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.333 "strip_size_kb": 64, 00:16:53.333 "state": "configuring", 00:16:53.333 "raid_level": "raid5f", 00:16:53.333 "superblock": false, 00:16:53.333 "num_base_bdevs": 4, 00:16:53.333 "num_base_bdevs_discovered": 0, 00:16:53.333 "num_base_bdevs_operational": 4, 00:16:53.333 "base_bdevs_list": [ 00:16:53.333 { 00:16:53.333 "name": "BaseBdev1", 00:16:53.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.333 "is_configured": false, 00:16:53.333 "data_offset": 0, 00:16:53.333 "data_size": 0 00:16:53.333 }, 00:16:53.333 { 00:16:53.333 "name": "BaseBdev2", 00:16:53.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.333 "is_configured": false, 00:16:53.333 "data_offset": 0, 00:16:53.333 "data_size": 0 00:16:53.333 }, 00:16:53.333 { 00:16:53.333 "name": "BaseBdev3", 00:16:53.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.333 "is_configured": false, 00:16:53.333 "data_offset": 0, 00:16:53.333 "data_size": 0 00:16:53.333 }, 00:16:53.333 { 00:16:53.333 "name": "BaseBdev4", 00:16:53.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.333 "is_configured": false, 00:16:53.333 "data_offset": 0, 00:16:53.333 "data_size": 0 00:16:53.333 } 00:16:53.333 ] 00:16:53.333 }' 00:16:53.333 19:15:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.333 19:15:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 [2024-11-27 19:15:03.173285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.593 [2024-11-27 19:15:03.173391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 [2024-11-27 19:15:03.185280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.593 [2024-11-27 19:15:03.185382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.593 [2024-11-27 19:15:03.185410] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.593 [2024-11-27 19:15:03.185433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.593 [2024-11-27 19:15:03.185451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.593 [2024-11-27 19:15:03.185472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.593 [2024-11-27 19:15:03.185489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:53.593 [2024-11-27 19:15:03.185517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.593 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 [2024-11-27 19:15:03.239792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.853 BaseBdev1 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.853 
19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 [ 00:16:53.853 { 00:16:53.853 "name": "BaseBdev1", 00:16:53.853 "aliases": [ 00:16:53.853 "ec9dd9f0-a7e9-4765-9377-052ff92d3981" 00:16:53.853 ], 00:16:53.853 "product_name": "Malloc disk", 00:16:53.853 "block_size": 512, 00:16:53.853 "num_blocks": 65536, 00:16:53.853 "uuid": "ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:53.853 "assigned_rate_limits": { 00:16:53.853 "rw_ios_per_sec": 0, 00:16:53.853 "rw_mbytes_per_sec": 0, 00:16:53.853 "r_mbytes_per_sec": 0, 00:16:53.853 "w_mbytes_per_sec": 0 00:16:53.853 }, 00:16:53.853 "claimed": true, 00:16:53.853 "claim_type": "exclusive_write", 00:16:53.853 "zoned": false, 00:16:53.853 "supported_io_types": { 00:16:53.853 "read": true, 00:16:53.853 "write": true, 00:16:53.853 "unmap": true, 00:16:53.853 "flush": true, 00:16:53.853 "reset": true, 00:16:53.853 "nvme_admin": false, 00:16:53.853 "nvme_io": false, 00:16:53.853 "nvme_io_md": false, 00:16:53.853 "write_zeroes": true, 00:16:53.853 "zcopy": true, 00:16:53.853 "get_zone_info": false, 00:16:53.853 "zone_management": false, 00:16:53.853 "zone_append": false, 00:16:53.853 "compare": false, 00:16:53.853 "compare_and_write": false, 00:16:53.853 "abort": true, 00:16:53.853 "seek_hole": false, 00:16:53.853 "seek_data": false, 00:16:53.853 "copy": true, 00:16:53.853 "nvme_iov_md": false 00:16:53.853 }, 00:16:53.853 "memory_domains": [ 00:16:53.853 { 00:16:53.853 "dma_device_id": "system", 00:16:53.853 "dma_device_type": 1 00:16:53.853 }, 00:16:53.853 { 00:16:53.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.853 "dma_device_type": 2 00:16:53.853 } 00:16:53.853 ], 00:16:53.853 "driver_specific": {} 00:16:53.853 } 
00:16:53.853 ] 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.853 "name": "Existed_Raid", 00:16:53.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.853 "strip_size_kb": 64, 00:16:53.853 "state": "configuring", 00:16:53.853 "raid_level": "raid5f", 00:16:53.853 "superblock": false, 00:16:53.853 "num_base_bdevs": 4, 00:16:53.853 "num_base_bdevs_discovered": 1, 00:16:53.853 "num_base_bdevs_operational": 4, 00:16:53.853 "base_bdevs_list": [ 00:16:53.853 { 00:16:53.853 "name": "BaseBdev1", 00:16:53.853 "uuid": "ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:53.853 "is_configured": true, 00:16:53.853 "data_offset": 0, 00:16:53.853 "data_size": 65536 00:16:53.853 }, 00:16:53.853 { 00:16:53.853 "name": "BaseBdev2", 00:16:53.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.853 "is_configured": false, 00:16:53.853 "data_offset": 0, 00:16:53.853 "data_size": 0 00:16:53.853 }, 00:16:53.853 { 00:16:53.853 "name": "BaseBdev3", 00:16:53.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.853 "is_configured": false, 00:16:53.853 "data_offset": 0, 00:16:53.853 "data_size": 0 00:16:53.853 }, 00:16:53.853 { 00:16:53.853 "name": "BaseBdev4", 00:16:53.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.853 "is_configured": false, 00:16:53.853 "data_offset": 0, 00:16:53.853 "data_size": 0 00:16:53.853 } 00:16:53.853 ] 00:16:53.853 }' 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.853 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 
[2024-11-27 19:15:03.719189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.114 [2024-11-27 19:15:03.719235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 [2024-11-27 19:15:03.727230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.114 [2024-11-27 19:15:03.729364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.114 [2024-11-27 19:15:03.729448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.114 [2024-11-27 19:15:03.729462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.114 [2024-11-27 19:15:03.729475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.114 [2024-11-27 19:15:03.729482] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.114 [2024-11-27 19:15:03.729490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.373 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.373 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.373 "name": "Existed_Raid", 00:16:54.373 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:54.373 "strip_size_kb": 64, 00:16:54.373 "state": "configuring", 00:16:54.373 "raid_level": "raid5f", 00:16:54.373 "superblock": false, 00:16:54.373 "num_base_bdevs": 4, 00:16:54.373 "num_base_bdevs_discovered": 1, 00:16:54.373 "num_base_bdevs_operational": 4, 00:16:54.373 "base_bdevs_list": [ 00:16:54.373 { 00:16:54.373 "name": "BaseBdev1", 00:16:54.373 "uuid": "ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:54.373 "is_configured": true, 00:16:54.373 "data_offset": 0, 00:16:54.373 "data_size": 65536 00:16:54.373 }, 00:16:54.373 { 00:16:54.373 "name": "BaseBdev2", 00:16:54.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.373 "is_configured": false, 00:16:54.373 "data_offset": 0, 00:16:54.373 "data_size": 0 00:16:54.374 }, 00:16:54.374 { 00:16:54.374 "name": "BaseBdev3", 00:16:54.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.374 "is_configured": false, 00:16:54.374 "data_offset": 0, 00:16:54.374 "data_size": 0 00:16:54.374 }, 00:16:54.374 { 00:16:54.374 "name": "BaseBdev4", 00:16:54.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.374 "is_configured": false, 00:16:54.374 "data_offset": 0, 00:16:54.374 "data_size": 0 00:16:54.374 } 00:16:54.374 ] 00:16:54.374 }' 00:16:54.374 19:15:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.374 19:15:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.634 [2024-11-27 19:15:04.210719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.634 BaseBdev2 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.634 [ 00:16:54.634 { 00:16:54.634 "name": "BaseBdev2", 00:16:54.634 "aliases": [ 00:16:54.634 "ec1c15be-d43a-43a4-95e9-294ede198d58" 00:16:54.634 ], 00:16:54.634 "product_name": "Malloc disk", 00:16:54.634 "block_size": 512, 00:16:54.634 "num_blocks": 65536, 00:16:54.634 "uuid": "ec1c15be-d43a-43a4-95e9-294ede198d58", 00:16:54.634 "assigned_rate_limits": { 00:16:54.634 "rw_ios_per_sec": 0, 00:16:54.634 "rw_mbytes_per_sec": 0, 00:16:54.634 
"r_mbytes_per_sec": 0, 00:16:54.634 "w_mbytes_per_sec": 0 00:16:54.634 }, 00:16:54.634 "claimed": true, 00:16:54.634 "claim_type": "exclusive_write", 00:16:54.634 "zoned": false, 00:16:54.634 "supported_io_types": { 00:16:54.634 "read": true, 00:16:54.634 "write": true, 00:16:54.634 "unmap": true, 00:16:54.634 "flush": true, 00:16:54.634 "reset": true, 00:16:54.634 "nvme_admin": false, 00:16:54.634 "nvme_io": false, 00:16:54.634 "nvme_io_md": false, 00:16:54.634 "write_zeroes": true, 00:16:54.634 "zcopy": true, 00:16:54.634 "get_zone_info": false, 00:16:54.634 "zone_management": false, 00:16:54.634 "zone_append": false, 00:16:54.634 "compare": false, 00:16:54.634 "compare_and_write": false, 00:16:54.634 "abort": true, 00:16:54.634 "seek_hole": false, 00:16:54.634 "seek_data": false, 00:16:54.634 "copy": true, 00:16:54.634 "nvme_iov_md": false 00:16:54.634 }, 00:16:54.634 "memory_domains": [ 00:16:54.634 { 00:16:54.634 "dma_device_id": "system", 00:16:54.634 "dma_device_type": 1 00:16:54.634 }, 00:16:54.634 { 00:16:54.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.634 "dma_device_type": 2 00:16:54.634 } 00:16:54.634 ], 00:16:54.634 "driver_specific": {} 00:16:54.634 } 00:16:54.634 ] 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.634 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.894 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.894 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.894 "name": "Existed_Raid", 00:16:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.894 "strip_size_kb": 64, 00:16:54.894 "state": "configuring", 00:16:54.894 "raid_level": "raid5f", 00:16:54.894 "superblock": false, 00:16:54.894 "num_base_bdevs": 4, 00:16:54.894 "num_base_bdevs_discovered": 2, 00:16:54.894 "num_base_bdevs_operational": 4, 00:16:54.894 "base_bdevs_list": [ 00:16:54.894 { 00:16:54.894 "name": "BaseBdev1", 00:16:54.894 "uuid": 
"ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:54.894 "is_configured": true, 00:16:54.894 "data_offset": 0, 00:16:54.894 "data_size": 65536 00:16:54.894 }, 00:16:54.894 { 00:16:54.894 "name": "BaseBdev2", 00:16:54.894 "uuid": "ec1c15be-d43a-43a4-95e9-294ede198d58", 00:16:54.894 "is_configured": true, 00:16:54.894 "data_offset": 0, 00:16:54.894 "data_size": 65536 00:16:54.894 }, 00:16:54.894 { 00:16:54.894 "name": "BaseBdev3", 00:16:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.894 "is_configured": false, 00:16:54.894 "data_offset": 0, 00:16:54.894 "data_size": 0 00:16:54.894 }, 00:16:54.894 { 00:16:54.894 "name": "BaseBdev4", 00:16:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.894 "is_configured": false, 00:16:54.894 "data_offset": 0, 00:16:54.894 "data_size": 0 00:16:54.894 } 00:16:54.894 ] 00:16:54.894 }' 00:16:54.894 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.894 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.153 [2024-11-27 19:15:04.767842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.153 BaseBdev3 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.153 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.413 [ 00:16:55.413 { 00:16:55.413 "name": "BaseBdev3", 00:16:55.413 "aliases": [ 00:16:55.413 "dd9a92d0-aa18-47d7-80e3-5c9085097b72" 00:16:55.413 ], 00:16:55.413 "product_name": "Malloc disk", 00:16:55.413 "block_size": 512, 00:16:55.413 "num_blocks": 65536, 00:16:55.413 "uuid": "dd9a92d0-aa18-47d7-80e3-5c9085097b72", 00:16:55.413 "assigned_rate_limits": { 00:16:55.413 "rw_ios_per_sec": 0, 00:16:55.413 "rw_mbytes_per_sec": 0, 00:16:55.413 "r_mbytes_per_sec": 0, 00:16:55.413 "w_mbytes_per_sec": 0 00:16:55.413 }, 00:16:55.413 "claimed": true, 00:16:55.413 "claim_type": "exclusive_write", 00:16:55.413 "zoned": false, 00:16:55.413 "supported_io_types": { 00:16:55.413 "read": true, 00:16:55.413 "write": true, 00:16:55.413 "unmap": true, 00:16:55.413 "flush": true, 00:16:55.413 "reset": true, 00:16:55.413 "nvme_admin": false, 
00:16:55.413 "nvme_io": false, 00:16:55.413 "nvme_io_md": false, 00:16:55.413 "write_zeroes": true, 00:16:55.413 "zcopy": true, 00:16:55.413 "get_zone_info": false, 00:16:55.413 "zone_management": false, 00:16:55.413 "zone_append": false, 00:16:55.413 "compare": false, 00:16:55.413 "compare_and_write": false, 00:16:55.413 "abort": true, 00:16:55.413 "seek_hole": false, 00:16:55.413 "seek_data": false, 00:16:55.413 "copy": true, 00:16:55.413 "nvme_iov_md": false 00:16:55.413 }, 00:16:55.413 "memory_domains": [ 00:16:55.413 { 00:16:55.413 "dma_device_id": "system", 00:16:55.413 "dma_device_type": 1 00:16:55.413 }, 00:16:55.413 { 00:16:55.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.413 "dma_device_type": 2 00:16:55.413 } 00:16:55.413 ], 00:16:55.413 "driver_specific": {} 00:16:55.413 } 00:16:55.413 ] 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:55.413 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.414 "name": "Existed_Raid", 00:16:55.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.414 "strip_size_kb": 64, 00:16:55.414 "state": "configuring", 00:16:55.414 "raid_level": "raid5f", 00:16:55.414 "superblock": false, 00:16:55.414 "num_base_bdevs": 4, 00:16:55.414 "num_base_bdevs_discovered": 3, 00:16:55.414 "num_base_bdevs_operational": 4, 00:16:55.414 "base_bdevs_list": [ 00:16:55.414 { 00:16:55.414 "name": "BaseBdev1", 00:16:55.414 "uuid": "ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:55.414 "is_configured": true, 00:16:55.414 "data_offset": 0, 00:16:55.414 "data_size": 65536 00:16:55.414 }, 00:16:55.414 { 00:16:55.414 "name": "BaseBdev2", 00:16:55.414 "uuid": "ec1c15be-d43a-43a4-95e9-294ede198d58", 00:16:55.414 "is_configured": true, 00:16:55.414 "data_offset": 0, 00:16:55.414 "data_size": 65536 00:16:55.414 }, 00:16:55.414 { 
00:16:55.414 "name": "BaseBdev3", 00:16:55.414 "uuid": "dd9a92d0-aa18-47d7-80e3-5c9085097b72", 00:16:55.414 "is_configured": true, 00:16:55.414 "data_offset": 0, 00:16:55.414 "data_size": 65536 00:16:55.414 }, 00:16:55.414 { 00:16:55.414 "name": "BaseBdev4", 00:16:55.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.414 "is_configured": false, 00:16:55.414 "data_offset": 0, 00:16:55.414 "data_size": 0 00:16:55.414 } 00:16:55.414 ] 00:16:55.414 }' 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.414 19:15:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.674 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:55.674 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.674 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.934 [2024-11-27 19:15:05.320486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:55.934 [2024-11-27 19:15:05.320560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:55.934 [2024-11-27 19:15:05.320570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:55.934 [2024-11-27 19:15:05.320931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:55.934 [2024-11-27 19:15:05.328254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:55.934 [2024-11-27 19:15:05.328279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:55.934 [2024-11-27 19:15:05.328559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.934 BaseBdev4 00:16:55.934 19:15:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.934 [ 00:16:55.934 { 00:16:55.934 "name": "BaseBdev4", 00:16:55.934 "aliases": [ 00:16:55.934 "0a20772b-f69c-4a23-8627-71a7899fd19a" 00:16:55.934 ], 00:16:55.934 "product_name": "Malloc disk", 00:16:55.934 "block_size": 512, 00:16:55.934 "num_blocks": 65536, 00:16:55.934 "uuid": "0a20772b-f69c-4a23-8627-71a7899fd19a", 00:16:55.934 "assigned_rate_limits": { 00:16:55.934 "rw_ios_per_sec": 0, 00:16:55.934 
"rw_mbytes_per_sec": 0, 00:16:55.934 "r_mbytes_per_sec": 0, 00:16:55.934 "w_mbytes_per_sec": 0 00:16:55.934 }, 00:16:55.934 "claimed": true, 00:16:55.934 "claim_type": "exclusive_write", 00:16:55.934 "zoned": false, 00:16:55.934 "supported_io_types": { 00:16:55.934 "read": true, 00:16:55.934 "write": true, 00:16:55.934 "unmap": true, 00:16:55.934 "flush": true, 00:16:55.934 "reset": true, 00:16:55.934 "nvme_admin": false, 00:16:55.934 "nvme_io": false, 00:16:55.934 "nvme_io_md": false, 00:16:55.934 "write_zeroes": true, 00:16:55.934 "zcopy": true, 00:16:55.934 "get_zone_info": false, 00:16:55.934 "zone_management": false, 00:16:55.934 "zone_append": false, 00:16:55.934 "compare": false, 00:16:55.934 "compare_and_write": false, 00:16:55.934 "abort": true, 00:16:55.934 "seek_hole": false, 00:16:55.934 "seek_data": false, 00:16:55.934 "copy": true, 00:16:55.934 "nvme_iov_md": false 00:16:55.934 }, 00:16:55.934 "memory_domains": [ 00:16:55.934 { 00:16:55.934 "dma_device_id": "system", 00:16:55.934 "dma_device_type": 1 00:16:55.934 }, 00:16:55.934 { 00:16:55.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.934 "dma_device_type": 2 00:16:55.934 } 00:16:55.934 ], 00:16:55.934 "driver_specific": {} 00:16:55.934 } 00:16:55.934 ] 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.934 19:15:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.934 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.934 "name": "Existed_Raid", 00:16:55.934 "uuid": "fc50510e-12d7-4dcb-b482-272a963a0072", 00:16:55.934 "strip_size_kb": 64, 00:16:55.934 "state": "online", 00:16:55.934 "raid_level": "raid5f", 00:16:55.935 "superblock": false, 00:16:55.935 "num_base_bdevs": 4, 00:16:55.935 "num_base_bdevs_discovered": 4, 00:16:55.935 "num_base_bdevs_operational": 4, 00:16:55.935 "base_bdevs_list": [ 00:16:55.935 { 00:16:55.935 "name": 
"BaseBdev1", 00:16:55.935 "uuid": "ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:55.935 "is_configured": true, 00:16:55.935 "data_offset": 0, 00:16:55.935 "data_size": 65536 00:16:55.935 }, 00:16:55.935 { 00:16:55.935 "name": "BaseBdev2", 00:16:55.935 "uuid": "ec1c15be-d43a-43a4-95e9-294ede198d58", 00:16:55.935 "is_configured": true, 00:16:55.935 "data_offset": 0, 00:16:55.935 "data_size": 65536 00:16:55.935 }, 00:16:55.935 { 00:16:55.935 "name": "BaseBdev3", 00:16:55.935 "uuid": "dd9a92d0-aa18-47d7-80e3-5c9085097b72", 00:16:55.935 "is_configured": true, 00:16:55.935 "data_offset": 0, 00:16:55.935 "data_size": 65536 00:16:55.935 }, 00:16:55.935 { 00:16:55.935 "name": "BaseBdev4", 00:16:55.935 "uuid": "0a20772b-f69c-4a23-8627-71a7899fd19a", 00:16:55.935 "is_configured": true, 00:16:55.935 "data_offset": 0, 00:16:55.935 "data_size": 65536 00:16:55.935 } 00:16:55.935 ] 00:16:55.935 }' 00:16:55.935 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.935 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.195 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 [2024-11-27 19:15:05.824732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.455 "name": "Existed_Raid", 00:16:56.455 "aliases": [ 00:16:56.455 "fc50510e-12d7-4dcb-b482-272a963a0072" 00:16:56.455 ], 00:16:56.455 "product_name": "Raid Volume", 00:16:56.455 "block_size": 512, 00:16:56.455 "num_blocks": 196608, 00:16:56.455 "uuid": "fc50510e-12d7-4dcb-b482-272a963a0072", 00:16:56.455 "assigned_rate_limits": { 00:16:56.455 "rw_ios_per_sec": 0, 00:16:56.455 "rw_mbytes_per_sec": 0, 00:16:56.455 "r_mbytes_per_sec": 0, 00:16:56.455 "w_mbytes_per_sec": 0 00:16:56.455 }, 00:16:56.455 "claimed": false, 00:16:56.455 "zoned": false, 00:16:56.455 "supported_io_types": { 00:16:56.455 "read": true, 00:16:56.455 "write": true, 00:16:56.455 "unmap": false, 00:16:56.455 "flush": false, 00:16:56.455 "reset": true, 00:16:56.455 "nvme_admin": false, 00:16:56.455 "nvme_io": false, 00:16:56.455 "nvme_io_md": false, 00:16:56.455 "write_zeroes": true, 00:16:56.455 "zcopy": false, 00:16:56.455 "get_zone_info": false, 00:16:56.455 "zone_management": false, 00:16:56.455 "zone_append": false, 00:16:56.455 "compare": false, 00:16:56.455 "compare_and_write": false, 00:16:56.455 "abort": false, 00:16:56.455 "seek_hole": false, 00:16:56.455 "seek_data": false, 00:16:56.455 "copy": false, 00:16:56.455 "nvme_iov_md": false 00:16:56.455 }, 00:16:56.455 "driver_specific": { 00:16:56.455 "raid": { 00:16:56.455 "uuid": "fc50510e-12d7-4dcb-b482-272a963a0072", 00:16:56.455 "strip_size_kb": 64, 
00:16:56.455 "state": "online", 00:16:56.455 "raid_level": "raid5f", 00:16:56.455 "superblock": false, 00:16:56.455 "num_base_bdevs": 4, 00:16:56.455 "num_base_bdevs_discovered": 4, 00:16:56.455 "num_base_bdevs_operational": 4, 00:16:56.455 "base_bdevs_list": [ 00:16:56.455 { 00:16:56.455 "name": "BaseBdev1", 00:16:56.455 "uuid": "ec9dd9f0-a7e9-4765-9377-052ff92d3981", 00:16:56.455 "is_configured": true, 00:16:56.455 "data_offset": 0, 00:16:56.455 "data_size": 65536 00:16:56.455 }, 00:16:56.455 { 00:16:56.455 "name": "BaseBdev2", 00:16:56.455 "uuid": "ec1c15be-d43a-43a4-95e9-294ede198d58", 00:16:56.455 "is_configured": true, 00:16:56.455 "data_offset": 0, 00:16:56.455 "data_size": 65536 00:16:56.455 }, 00:16:56.455 { 00:16:56.455 "name": "BaseBdev3", 00:16:56.455 "uuid": "dd9a92d0-aa18-47d7-80e3-5c9085097b72", 00:16:56.455 "is_configured": true, 00:16:56.455 "data_offset": 0, 00:16:56.455 "data_size": 65536 00:16:56.455 }, 00:16:56.455 { 00:16:56.455 "name": "BaseBdev4", 00:16:56.455 "uuid": "0a20772b-f69c-4a23-8627-71a7899fd19a", 00:16:56.455 "is_configured": true, 00:16:56.455 "data_offset": 0, 00:16:56.455 "data_size": 65536 00:16:56.455 } 00:16:56.455 ] 00:16:56.455 } 00:16:56.455 } 00:16:56.455 }' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:56.455 BaseBdev2 00:16:56.455 BaseBdev3 00:16:56.455 BaseBdev4' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.455 19:15:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.455 19:15:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.455 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:56.715 [2024-11-27 19:15:06.100094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:56.715 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.716 19:15:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.716 "name": "Existed_Raid", 00:16:56.716 "uuid": "fc50510e-12d7-4dcb-b482-272a963a0072", 00:16:56.716 "strip_size_kb": 64, 00:16:56.716 "state": "online", 00:16:56.716 "raid_level": "raid5f", 00:16:56.716 "superblock": false, 00:16:56.716 "num_base_bdevs": 4, 00:16:56.716 "num_base_bdevs_discovered": 3, 00:16:56.716 "num_base_bdevs_operational": 3, 00:16:56.716 "base_bdevs_list": [ 00:16:56.716 { 00:16:56.716 "name": null, 00:16:56.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.716 "is_configured": false, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 }, 00:16:56.716 { 00:16:56.716 "name": "BaseBdev2", 00:16:56.716 "uuid": "ec1c15be-d43a-43a4-95e9-294ede198d58", 00:16:56.716 "is_configured": true, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 }, 00:16:56.716 { 00:16:56.716 "name": "BaseBdev3", 00:16:56.716 "uuid": "dd9a92d0-aa18-47d7-80e3-5c9085097b72", 00:16:56.716 "is_configured": true, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 }, 00:16:56.716 { 00:16:56.716 "name": "BaseBdev4", 00:16:56.716 "uuid": "0a20772b-f69c-4a23-8627-71a7899fd19a", 00:16:56.716 "is_configured": true, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 } 00:16:56.716 ] 00:16:56.716 }' 00:16:56.716 
19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.716 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 [2024-11-27 19:15:06.695892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.285 [2024-11-27 19:15:06.696081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.285 [2024-11-27 19:15:06.795043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.285 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 [2024-11-27 19:15:06.854946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.544 19:15:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.544 [2024-11-27 19:15:07.009887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:57.544 [2024-11-27 19:15:07.010040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.544 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.545 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.545 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.803 BaseBdev2 00:16:57.803 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.803 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 [ 00:16:57.804 { 00:16:57.804 "name": "BaseBdev2", 00:16:57.804 "aliases": [ 00:16:57.804 "f663e0f4-7da2-4df2-b84d-648d4d560ec5" 00:16:57.804 ], 00:16:57.804 "product_name": "Malloc disk", 00:16:57.804 "block_size": 512, 00:16:57.804 "num_blocks": 65536, 00:16:57.804 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:16:57.804 "assigned_rate_limits": { 00:16:57.804 "rw_ios_per_sec": 0, 00:16:57.804 "rw_mbytes_per_sec": 0, 00:16:57.804 "r_mbytes_per_sec": 0, 00:16:57.804 "w_mbytes_per_sec": 0 00:16:57.804 }, 00:16:57.804 "claimed": false, 00:16:57.804 "zoned": false, 00:16:57.804 "supported_io_types": { 00:16:57.804 "read": true, 00:16:57.804 "write": true, 00:16:57.804 "unmap": true, 00:16:57.804 "flush": true, 00:16:57.804 "reset": true, 00:16:57.804 "nvme_admin": false, 00:16:57.804 "nvme_io": false, 00:16:57.804 "nvme_io_md": false, 00:16:57.804 "write_zeroes": true, 00:16:57.804 "zcopy": true, 00:16:57.804 "get_zone_info": false, 00:16:57.804 "zone_management": false, 00:16:57.804 "zone_append": false, 00:16:57.804 "compare": false, 00:16:57.804 "compare_and_write": false, 00:16:57.804 "abort": true, 00:16:57.804 "seek_hole": false, 00:16:57.804 "seek_data": false, 00:16:57.804 "copy": true, 00:16:57.804 "nvme_iov_md": false 00:16:57.804 }, 00:16:57.804 "memory_domains": [ 00:16:57.804 { 00:16:57.804 "dma_device_id": "system", 00:16:57.804 
"dma_device_type": 1 00:16:57.804 }, 00:16:57.804 { 00:16:57.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.804 "dma_device_type": 2 00:16:57.804 } 00:16:57.804 ], 00:16:57.804 "driver_specific": {} 00:16:57.804 } 00:16:57.804 ] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 BaseBdev3 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.804 19:15:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 [ 00:16:57.804 { 00:16:57.804 "name": "BaseBdev3", 00:16:57.804 "aliases": [ 00:16:57.804 "6db384d1-47fd-4516-9cc8-17cf6f32cb0b" 00:16:57.804 ], 00:16:57.804 "product_name": "Malloc disk", 00:16:57.804 "block_size": 512, 00:16:57.804 "num_blocks": 65536, 00:16:57.804 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:16:57.804 "assigned_rate_limits": { 00:16:57.804 "rw_ios_per_sec": 0, 00:16:57.804 "rw_mbytes_per_sec": 0, 00:16:57.804 "r_mbytes_per_sec": 0, 00:16:57.804 "w_mbytes_per_sec": 0 00:16:57.804 }, 00:16:57.804 "claimed": false, 00:16:57.804 "zoned": false, 00:16:57.804 "supported_io_types": { 00:16:57.804 "read": true, 00:16:57.804 "write": true, 00:16:57.804 "unmap": true, 00:16:57.804 "flush": true, 00:16:57.804 "reset": true, 00:16:57.804 "nvme_admin": false, 00:16:57.804 "nvme_io": false, 00:16:57.804 "nvme_io_md": false, 00:16:57.804 "write_zeroes": true, 00:16:57.804 "zcopy": true, 00:16:57.804 "get_zone_info": false, 00:16:57.804 "zone_management": false, 00:16:57.804 "zone_append": false, 00:16:57.804 "compare": false, 00:16:57.804 "compare_and_write": false, 00:16:57.804 "abort": true, 00:16:57.804 "seek_hole": false, 00:16:57.804 "seek_data": false, 00:16:57.804 "copy": true, 00:16:57.804 "nvme_iov_md": false 00:16:57.804 }, 00:16:57.804 "memory_domains": [ 00:16:57.804 { 00:16:57.804 
"dma_device_id": "system", 00:16:57.804 "dma_device_type": 1 00:16:57.804 }, 00:16:57.804 { 00:16:57.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.804 "dma_device_type": 2 00:16:57.804 } 00:16:57.804 ], 00:16:57.804 "driver_specific": {} 00:16:57.804 } 00:16:57.804 ] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 BaseBdev4 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.804 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.804 [ 00:16:57.804 { 00:16:57.804 "name": "BaseBdev4", 00:16:57.804 "aliases": [ 00:16:57.804 "ebb5b429-45d7-430c-9935-ad226bacae21" 00:16:57.804 ], 00:16:57.804 "product_name": "Malloc disk", 00:16:57.804 "block_size": 512, 00:16:57.804 "num_blocks": 65536, 00:16:57.804 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:16:57.804 "assigned_rate_limits": { 00:16:57.804 "rw_ios_per_sec": 0, 00:16:57.804 "rw_mbytes_per_sec": 0, 00:16:57.804 "r_mbytes_per_sec": 0, 00:16:57.804 "w_mbytes_per_sec": 0 00:16:57.804 }, 00:16:57.804 "claimed": false, 00:16:57.804 "zoned": false, 00:16:57.804 "supported_io_types": { 00:16:57.804 "read": true, 00:16:57.804 "write": true, 00:16:57.804 "unmap": true, 00:16:57.804 "flush": true, 00:16:57.804 "reset": true, 00:16:57.804 "nvme_admin": false, 00:16:57.804 "nvme_io": false, 00:16:57.804 "nvme_io_md": false, 00:16:57.804 "write_zeroes": true, 00:16:57.804 "zcopy": true, 00:16:57.804 "get_zone_info": false, 00:16:57.804 "zone_management": false, 00:16:57.804 "zone_append": false, 00:16:57.804 "compare": false, 00:16:57.804 "compare_and_write": false, 00:16:57.804 "abort": true, 00:16:57.804 "seek_hole": false, 00:16:57.804 "seek_data": false, 00:16:57.804 "copy": true, 00:16:57.805 "nvme_iov_md": false 00:16:57.805 }, 00:16:57.805 "memory_domains": [ 
00:16:57.805 { 00:16:57.805 "dma_device_id": "system", 00:16:57.805 "dma_device_type": 1 00:16:57.805 }, 00:16:57.805 { 00:16:57.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.805 "dma_device_type": 2 00:16:57.805 } 00:16:57.805 ], 00:16:57.805 "driver_specific": {} 00:16:57.805 } 00:16:57.805 ] 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.805 [2024-11-27 19:15:07.431958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:57.805 [2024-11-27 19:15:07.432097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:57.805 [2024-11-27 19:15:07.432142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.805 [2024-11-27 19:15:07.434250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.805 [2024-11-27 19:15:07.434367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.805 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.067 "name": "Existed_Raid", 00:16:58.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.067 "strip_size_kb": 64, 00:16:58.067 "state": "configuring", 00:16:58.067 "raid_level": "raid5f", 00:16:58.067 
"superblock": false, 00:16:58.067 "num_base_bdevs": 4, 00:16:58.067 "num_base_bdevs_discovered": 3, 00:16:58.067 "num_base_bdevs_operational": 4, 00:16:58.067 "base_bdevs_list": [ 00:16:58.067 { 00:16:58.067 "name": "BaseBdev1", 00:16:58.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.067 "is_configured": false, 00:16:58.067 "data_offset": 0, 00:16:58.067 "data_size": 0 00:16:58.067 }, 00:16:58.067 { 00:16:58.067 "name": "BaseBdev2", 00:16:58.067 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:16:58.067 "is_configured": true, 00:16:58.067 "data_offset": 0, 00:16:58.067 "data_size": 65536 00:16:58.067 }, 00:16:58.067 { 00:16:58.067 "name": "BaseBdev3", 00:16:58.067 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:16:58.067 "is_configured": true, 00:16:58.067 "data_offset": 0, 00:16:58.067 "data_size": 65536 00:16:58.067 }, 00:16:58.067 { 00:16:58.067 "name": "BaseBdev4", 00:16:58.067 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:16:58.067 "is_configured": true, 00:16:58.067 "data_offset": 0, 00:16:58.067 "data_size": 65536 00:16:58.067 } 00:16:58.067 ] 00:16:58.067 }' 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.067 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.350 [2024-11-27 19:15:07.927272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.350 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.350 "name": "Existed_Raid", 00:16:58.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.351 "strip_size_kb": 64, 00:16:58.351 "state": "configuring", 00:16:58.351 "raid_level": "raid5f", 00:16:58.351 "superblock": false, 
00:16:58.351 "num_base_bdevs": 4, 00:16:58.351 "num_base_bdevs_discovered": 2, 00:16:58.351 "num_base_bdevs_operational": 4, 00:16:58.351 "base_bdevs_list": [ 00:16:58.351 { 00:16:58.351 "name": "BaseBdev1", 00:16:58.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.351 "is_configured": false, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 0 00:16:58.351 }, 00:16:58.351 { 00:16:58.351 "name": null, 00:16:58.351 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:16:58.351 "is_configured": false, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 65536 00:16:58.351 }, 00:16:58.351 { 00:16:58.351 "name": "BaseBdev3", 00:16:58.351 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:16:58.351 "is_configured": true, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 65536 00:16:58.351 }, 00:16:58.351 { 00:16:58.351 "name": "BaseBdev4", 00:16:58.351 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:16:58.351 "is_configured": true, 00:16:58.351 "data_offset": 0, 00:16:58.351 "data_size": 65536 00:16:58.351 } 00:16:58.351 ] 00:16:58.351 }' 00:16:58.351 19:15:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.351 19:15:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:58.919 
19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.919 [2024-11-27 19:15:08.484005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.919 BaseBdev1 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:58.919 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.919 
19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.919 [ 00:16:58.919 { 00:16:58.919 "name": "BaseBdev1", 00:16:58.919 "aliases": [ 00:16:58.919 "3ec7d129-36e4-4c97-80de-6cf757131718" 00:16:58.919 ], 00:16:58.919 "product_name": "Malloc disk", 00:16:58.919 "block_size": 512, 00:16:58.919 "num_blocks": 65536, 00:16:58.919 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:16:58.919 "assigned_rate_limits": { 00:16:58.919 "rw_ios_per_sec": 0, 00:16:58.919 "rw_mbytes_per_sec": 0, 00:16:58.919 "r_mbytes_per_sec": 0, 00:16:58.919 "w_mbytes_per_sec": 0 00:16:58.919 }, 00:16:58.919 "claimed": true, 00:16:58.919 "claim_type": "exclusive_write", 00:16:58.919 "zoned": false, 00:16:58.919 "supported_io_types": { 00:16:58.919 "read": true, 00:16:58.919 "write": true, 00:16:58.919 "unmap": true, 00:16:58.919 "flush": true, 00:16:58.919 "reset": true, 00:16:58.919 "nvme_admin": false, 00:16:58.919 "nvme_io": false, 00:16:58.919 "nvme_io_md": false, 00:16:58.920 "write_zeroes": true, 00:16:58.920 "zcopy": true, 00:16:58.920 "get_zone_info": false, 00:16:58.920 "zone_management": false, 00:16:58.920 "zone_append": false, 00:16:58.920 "compare": false, 00:16:58.920 "compare_and_write": false, 00:16:58.920 "abort": true, 00:16:58.920 "seek_hole": false, 00:16:58.920 "seek_data": false, 00:16:58.920 "copy": true, 00:16:58.920 "nvme_iov_md": false 00:16:58.920 }, 00:16:58.920 "memory_domains": [ 00:16:58.920 { 00:16:58.920 "dma_device_id": "system", 00:16:58.920 "dma_device_type": 1 00:16:58.920 }, 00:16:58.920 { 00:16:58.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.920 "dma_device_type": 2 00:16:58.920 } 00:16:58.920 ], 00:16:58.920 "driver_specific": {} 00:16:58.920 } 00:16:58.920 ] 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:58.920 19:15:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.920 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.178 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.178 "name": "Existed_Raid", 00:16:59.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.178 "strip_size_kb": 64, 00:16:59.178 "state": 
"configuring", 00:16:59.178 "raid_level": "raid5f", 00:16:59.178 "superblock": false, 00:16:59.178 "num_base_bdevs": 4, 00:16:59.178 "num_base_bdevs_discovered": 3, 00:16:59.178 "num_base_bdevs_operational": 4, 00:16:59.178 "base_bdevs_list": [ 00:16:59.178 { 00:16:59.178 "name": "BaseBdev1", 00:16:59.178 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:16:59.178 "is_configured": true, 00:16:59.178 "data_offset": 0, 00:16:59.178 "data_size": 65536 00:16:59.178 }, 00:16:59.178 { 00:16:59.178 "name": null, 00:16:59.178 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:16:59.178 "is_configured": false, 00:16:59.178 "data_offset": 0, 00:16:59.178 "data_size": 65536 00:16:59.178 }, 00:16:59.178 { 00:16:59.178 "name": "BaseBdev3", 00:16:59.178 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:16:59.178 "is_configured": true, 00:16:59.178 "data_offset": 0, 00:16:59.178 "data_size": 65536 00:16:59.178 }, 00:16:59.178 { 00:16:59.178 "name": "BaseBdev4", 00:16:59.178 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:16:59.178 "is_configured": true, 00:16:59.178 "data_offset": 0, 00:16:59.178 "data_size": 65536 00:16:59.178 } 00:16:59.178 ] 00:16:59.178 }' 00:16:59.178 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.178 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.436 19:15:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.436 19:15:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.436 [2024-11-27 19:15:08.999399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.436 19:15:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.436 "name": "Existed_Raid", 00:16:59.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.436 "strip_size_kb": 64, 00:16:59.436 "state": "configuring", 00:16:59.436 "raid_level": "raid5f", 00:16:59.436 "superblock": false, 00:16:59.436 "num_base_bdevs": 4, 00:16:59.436 "num_base_bdevs_discovered": 2, 00:16:59.436 "num_base_bdevs_operational": 4, 00:16:59.436 "base_bdevs_list": [ 00:16:59.436 { 00:16:59.436 "name": "BaseBdev1", 00:16:59.436 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:16:59.436 "is_configured": true, 00:16:59.436 "data_offset": 0, 00:16:59.436 "data_size": 65536 00:16:59.436 }, 00:16:59.436 { 00:16:59.436 "name": null, 00:16:59.436 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:16:59.436 "is_configured": false, 00:16:59.436 "data_offset": 0, 00:16:59.436 "data_size": 65536 00:16:59.436 }, 00:16:59.436 { 00:16:59.436 "name": null, 00:16:59.436 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:16:59.436 "is_configured": false, 00:16:59.436 "data_offset": 0, 00:16:59.436 "data_size": 65536 00:16:59.436 }, 00:16:59.436 { 00:16:59.436 "name": "BaseBdev4", 00:16:59.436 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:16:59.436 "is_configured": true, 00:16:59.436 "data_offset": 0, 00:16:59.436 "data_size": 65536 00:16:59.436 } 00:16:59.436 ] 00:16:59.436 }' 00:16:59.436 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.436 19:15:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.003 [2024-11-27 19:15:09.482599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.003 
19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.003 "name": "Existed_Raid", 00:17:00.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.003 "strip_size_kb": 64, 00:17:00.003 "state": "configuring", 00:17:00.003 "raid_level": "raid5f", 00:17:00.003 "superblock": false, 00:17:00.003 "num_base_bdevs": 4, 00:17:00.003 "num_base_bdevs_discovered": 3, 00:17:00.003 "num_base_bdevs_operational": 4, 00:17:00.003 "base_bdevs_list": [ 00:17:00.003 { 00:17:00.003 "name": "BaseBdev1", 00:17:00.003 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:17:00.003 "is_configured": true, 00:17:00.003 "data_offset": 0, 00:17:00.003 "data_size": 65536 00:17:00.003 }, 00:17:00.003 { 00:17:00.003 "name": null, 00:17:00.003 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:17:00.003 "is_configured": 
false, 00:17:00.003 "data_offset": 0, 00:17:00.003 "data_size": 65536 00:17:00.003 }, 00:17:00.003 { 00:17:00.003 "name": "BaseBdev3", 00:17:00.003 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:17:00.003 "is_configured": true, 00:17:00.003 "data_offset": 0, 00:17:00.003 "data_size": 65536 00:17:00.003 }, 00:17:00.003 { 00:17:00.003 "name": "BaseBdev4", 00:17:00.003 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:17:00.003 "is_configured": true, 00:17:00.003 "data_offset": 0, 00:17:00.003 "data_size": 65536 00:17:00.003 } 00:17:00.003 ] 00:17:00.003 }' 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.003 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.574 19:15:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.574 [2024-11-27 19:15:09.997810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.574 "name": "Existed_Raid", 00:17:00.574 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:00.574 "strip_size_kb": 64, 00:17:00.574 "state": "configuring", 00:17:00.574 "raid_level": "raid5f", 00:17:00.574 "superblock": false, 00:17:00.574 "num_base_bdevs": 4, 00:17:00.574 "num_base_bdevs_discovered": 2, 00:17:00.574 "num_base_bdevs_operational": 4, 00:17:00.574 "base_bdevs_list": [ 00:17:00.574 { 00:17:00.574 "name": null, 00:17:00.574 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:17:00.574 "is_configured": false, 00:17:00.574 "data_offset": 0, 00:17:00.574 "data_size": 65536 00:17:00.574 }, 00:17:00.574 { 00:17:00.574 "name": null, 00:17:00.574 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:17:00.574 "is_configured": false, 00:17:00.574 "data_offset": 0, 00:17:00.574 "data_size": 65536 00:17:00.574 }, 00:17:00.574 { 00:17:00.574 "name": "BaseBdev3", 00:17:00.574 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:17:00.574 "is_configured": true, 00:17:00.574 "data_offset": 0, 00:17:00.574 "data_size": 65536 00:17:00.574 }, 00:17:00.574 { 00:17:00.574 "name": "BaseBdev4", 00:17:00.574 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:17:00.574 "is_configured": true, 00:17:00.574 "data_offset": 0, 00:17:00.574 "data_size": 65536 00:17:00.574 } 00:17:00.574 ] 00:17:00.574 }' 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.574 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.142 [2024-11-27 19:15:10.537539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.142 "name": "Existed_Raid", 00:17:01.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.142 "strip_size_kb": 64, 00:17:01.142 "state": "configuring", 00:17:01.142 "raid_level": "raid5f", 00:17:01.142 "superblock": false, 00:17:01.142 "num_base_bdevs": 4, 00:17:01.142 "num_base_bdevs_discovered": 3, 00:17:01.142 "num_base_bdevs_operational": 4, 00:17:01.142 "base_bdevs_list": [ 00:17:01.142 { 00:17:01.142 "name": null, 00:17:01.142 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:17:01.142 "is_configured": false, 00:17:01.142 "data_offset": 0, 00:17:01.142 "data_size": 65536 00:17:01.142 }, 00:17:01.142 { 00:17:01.142 "name": "BaseBdev2", 00:17:01.142 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:17:01.142 "is_configured": true, 00:17:01.142 "data_offset": 0, 00:17:01.142 "data_size": 65536 00:17:01.142 }, 00:17:01.142 { 00:17:01.142 "name": "BaseBdev3", 00:17:01.142 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:17:01.142 "is_configured": true, 00:17:01.142 "data_offset": 0, 00:17:01.142 "data_size": 65536 00:17:01.142 }, 00:17:01.142 { 00:17:01.142 "name": "BaseBdev4", 00:17:01.142 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:17:01.142 "is_configured": true, 00:17:01.142 "data_offset": 0, 00:17:01.142 "data_size": 65536 00:17:01.142 } 00:17:01.142 ] 00:17:01.142 }' 00:17:01.142 19:15:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.142 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.401 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:01.401 19:15:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.401 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.401 19:15:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.401 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.401 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:01.401 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.401 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:01.401 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.401 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ec7d129-36e4-4c97-80de-6cf757131718 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.662 [2024-11-27 19:15:11.106369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:01.662 [2024-11-27 
19:15:11.106506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.662 [2024-11-27 19:15:11.106531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:01.662 [2024-11-27 19:15:11.106877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:01.662 [2024-11-27 19:15:11.113480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.662 [2024-11-27 19:15:11.113545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:01.662 [2024-11-27 19:15:11.113856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.662 NewBaseBdev 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.662 [ 00:17:01.662 { 00:17:01.662 "name": "NewBaseBdev", 00:17:01.662 "aliases": [ 00:17:01.662 "3ec7d129-36e4-4c97-80de-6cf757131718" 00:17:01.662 ], 00:17:01.662 "product_name": "Malloc disk", 00:17:01.662 "block_size": 512, 00:17:01.662 "num_blocks": 65536, 00:17:01.662 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:17:01.662 "assigned_rate_limits": { 00:17:01.662 "rw_ios_per_sec": 0, 00:17:01.662 "rw_mbytes_per_sec": 0, 00:17:01.662 "r_mbytes_per_sec": 0, 00:17:01.662 "w_mbytes_per_sec": 0 00:17:01.662 }, 00:17:01.662 "claimed": true, 00:17:01.662 "claim_type": "exclusive_write", 00:17:01.662 "zoned": false, 00:17:01.662 "supported_io_types": { 00:17:01.662 "read": true, 00:17:01.662 "write": true, 00:17:01.662 "unmap": true, 00:17:01.662 "flush": true, 00:17:01.662 "reset": true, 00:17:01.662 "nvme_admin": false, 00:17:01.662 "nvme_io": false, 00:17:01.662 "nvme_io_md": false, 00:17:01.662 "write_zeroes": true, 00:17:01.662 "zcopy": true, 00:17:01.662 "get_zone_info": false, 00:17:01.662 "zone_management": false, 00:17:01.662 "zone_append": false, 00:17:01.662 "compare": false, 00:17:01.662 "compare_and_write": false, 00:17:01.662 "abort": true, 00:17:01.662 "seek_hole": false, 00:17:01.662 "seek_data": false, 00:17:01.662 "copy": true, 00:17:01.662 "nvme_iov_md": false 00:17:01.662 }, 00:17:01.662 "memory_domains": [ 00:17:01.662 { 00:17:01.662 "dma_device_id": "system", 00:17:01.662 "dma_device_type": 1 00:17:01.662 }, 00:17:01.662 { 00:17:01.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.662 "dma_device_type": 2 00:17:01.662 } 
00:17:01.662 ], 00:17:01.662 "driver_specific": {} 00:17:01.662 } 00:17:01.662 ] 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.662 19:15:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.663 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.663 "name": "Existed_Raid", 00:17:01.663 "uuid": "00a7c70e-2273-4bcf-a6e1-e6587c0fe6dc", 00:17:01.663 "strip_size_kb": 64, 00:17:01.663 "state": "online", 00:17:01.663 "raid_level": "raid5f", 00:17:01.663 "superblock": false, 00:17:01.663 "num_base_bdevs": 4, 00:17:01.663 "num_base_bdevs_discovered": 4, 00:17:01.663 "num_base_bdevs_operational": 4, 00:17:01.663 "base_bdevs_list": [ 00:17:01.663 { 00:17:01.663 "name": "NewBaseBdev", 00:17:01.663 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:17:01.663 "is_configured": true, 00:17:01.663 "data_offset": 0, 00:17:01.663 "data_size": 65536 00:17:01.663 }, 00:17:01.663 { 00:17:01.663 "name": "BaseBdev2", 00:17:01.663 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:17:01.663 "is_configured": true, 00:17:01.663 "data_offset": 0, 00:17:01.663 "data_size": 65536 00:17:01.663 }, 00:17:01.663 { 00:17:01.663 "name": "BaseBdev3", 00:17:01.663 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:17:01.663 "is_configured": true, 00:17:01.663 "data_offset": 0, 00:17:01.663 "data_size": 65536 00:17:01.663 }, 00:17:01.663 { 00:17:01.663 "name": "BaseBdev4", 00:17:01.663 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:17:01.663 "is_configured": true, 00:17:01.663 "data_offset": 0, 00:17:01.663 "data_size": 65536 00:17:01.663 } 00:17:01.663 ] 00:17:01.663 }' 00:17:01.663 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.663 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 [2024-11-27 19:15:11.590284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.233 "name": "Existed_Raid", 00:17:02.233 "aliases": [ 00:17:02.233 "00a7c70e-2273-4bcf-a6e1-e6587c0fe6dc" 00:17:02.233 ], 00:17:02.233 "product_name": "Raid Volume", 00:17:02.233 "block_size": 512, 00:17:02.233 "num_blocks": 196608, 00:17:02.233 "uuid": "00a7c70e-2273-4bcf-a6e1-e6587c0fe6dc", 00:17:02.233 "assigned_rate_limits": { 00:17:02.233 "rw_ios_per_sec": 0, 00:17:02.233 "rw_mbytes_per_sec": 0, 00:17:02.233 "r_mbytes_per_sec": 0, 00:17:02.233 "w_mbytes_per_sec": 0 00:17:02.233 }, 00:17:02.233 "claimed": false, 00:17:02.233 "zoned": false, 00:17:02.233 "supported_io_types": { 00:17:02.233 "read": true, 00:17:02.233 "write": true, 00:17:02.233 "unmap": false, 00:17:02.233 "flush": false, 00:17:02.233 "reset": true, 00:17:02.233 "nvme_admin": false, 00:17:02.233 "nvme_io": false, 00:17:02.233 "nvme_io_md": 
false, 00:17:02.233 "write_zeroes": true, 00:17:02.233 "zcopy": false, 00:17:02.233 "get_zone_info": false, 00:17:02.233 "zone_management": false, 00:17:02.233 "zone_append": false, 00:17:02.233 "compare": false, 00:17:02.233 "compare_and_write": false, 00:17:02.233 "abort": false, 00:17:02.233 "seek_hole": false, 00:17:02.233 "seek_data": false, 00:17:02.233 "copy": false, 00:17:02.233 "nvme_iov_md": false 00:17:02.233 }, 00:17:02.233 "driver_specific": { 00:17:02.233 "raid": { 00:17:02.233 "uuid": "00a7c70e-2273-4bcf-a6e1-e6587c0fe6dc", 00:17:02.233 "strip_size_kb": 64, 00:17:02.233 "state": "online", 00:17:02.233 "raid_level": "raid5f", 00:17:02.233 "superblock": false, 00:17:02.233 "num_base_bdevs": 4, 00:17:02.233 "num_base_bdevs_discovered": 4, 00:17:02.233 "num_base_bdevs_operational": 4, 00:17:02.233 "base_bdevs_list": [ 00:17:02.233 { 00:17:02.233 "name": "NewBaseBdev", 00:17:02.233 "uuid": "3ec7d129-36e4-4c97-80de-6cf757131718", 00:17:02.233 "is_configured": true, 00:17:02.233 "data_offset": 0, 00:17:02.233 "data_size": 65536 00:17:02.233 }, 00:17:02.233 { 00:17:02.233 "name": "BaseBdev2", 00:17:02.233 "uuid": "f663e0f4-7da2-4df2-b84d-648d4d560ec5", 00:17:02.233 "is_configured": true, 00:17:02.233 "data_offset": 0, 00:17:02.233 "data_size": 65536 00:17:02.233 }, 00:17:02.233 { 00:17:02.233 "name": "BaseBdev3", 00:17:02.233 "uuid": "6db384d1-47fd-4516-9cc8-17cf6f32cb0b", 00:17:02.233 "is_configured": true, 00:17:02.233 "data_offset": 0, 00:17:02.233 "data_size": 65536 00:17:02.233 }, 00:17:02.233 { 00:17:02.233 "name": "BaseBdev4", 00:17:02.233 "uuid": "ebb5b429-45d7-430c-9935-ad226bacae21", 00:17:02.233 "is_configured": true, 00:17:02.233 "data_offset": 0, 00:17:02.233 "data_size": 65536 00:17:02.233 } 00:17:02.233 ] 00:17:02.233 } 00:17:02.233 } 00:17:02.233 }' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.233 19:15:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:02.233 BaseBdev2 00:17:02.233 BaseBdev3 00:17:02.233 BaseBdev4' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.233 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.233 19:15:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.494 [2024-11-27 19:15:11.893536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.494 [2024-11-27 19:15:11.893611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.494 [2024-11-27 19:15:11.893719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.494 [2024-11-27 19:15:11.894079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.494 [2024-11-27 19:15:11.894125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82839 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82839 ']' 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82839 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82839 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.494 killing process with pid 82839 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82839' 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82839 00:17:02.494 [2024-11-27 19:15:11.942262] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.494 19:15:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82839 00:17:02.754 [2024-11-27 19:15:12.362563] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.135 00:17:04.135 real 0m11.788s 00:17:04.135 user 0m18.411s 00:17:04.135 sys 0m2.316s 00:17:04.135 ************************************ 00:17:04.135 END TEST raid5f_state_function_test 00:17:04.135 ************************************ 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.135 19:15:13 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:04.135 19:15:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:04.135 19:15:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.135 19:15:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.135 ************************************ 00:17:04.135 START TEST 
raid5f_state_function_test_sb 00:17:04.135 ************************************ 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:04.135 
19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83514 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83514' 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:04.135 Process raid pid: 83514 00:17:04.135 19:15:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83514 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83514 ']' 00:17:04.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.135 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.136 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.136 19:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.136 [2024-11-27 19:15:13.752667] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:04.136 [2024-11-27 19:15:13.752794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.395 [2024-11-27 19:15:13.926799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.655 [2024-11-27 19:15:14.069017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.915 [2024-11-27 19:15:14.303852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.915 [2024-11-27 19:15:14.303966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.175 [2024-11-27 19:15:14.568135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.175 [2024-11-27 19:15:14.568271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.175 [2024-11-27 19:15:14.568301] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.175 [2024-11-27 19:15:14.568325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.175 [2024-11-27 19:15:14.568344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:05.175 [2024-11-27 19:15:14.568365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.175 [2024-11-27 19:15:14.568382] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.175 [2024-11-27 19:15:14.568403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.175 "name": "Existed_Raid", 00:17:05.175 "uuid": "25a9a5d9-84eb-48ba-99ba-66d25169d640", 00:17:05.175 "strip_size_kb": 64, 00:17:05.175 "state": "configuring", 00:17:05.175 "raid_level": "raid5f", 00:17:05.175 "superblock": true, 00:17:05.175 "num_base_bdevs": 4, 00:17:05.175 "num_base_bdevs_discovered": 0, 00:17:05.175 "num_base_bdevs_operational": 4, 00:17:05.175 "base_bdevs_list": [ 00:17:05.175 { 00:17:05.175 "name": "BaseBdev1", 00:17:05.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.175 "is_configured": false, 00:17:05.175 "data_offset": 0, 00:17:05.175 "data_size": 0 00:17:05.175 }, 00:17:05.175 { 00:17:05.175 "name": "BaseBdev2", 00:17:05.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.175 "is_configured": false, 00:17:05.175 "data_offset": 0, 00:17:05.175 "data_size": 0 00:17:05.175 }, 00:17:05.175 { 00:17:05.175 "name": "BaseBdev3", 00:17:05.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.175 "is_configured": false, 00:17:05.175 "data_offset": 0, 00:17:05.175 "data_size": 0 00:17:05.175 }, 00:17:05.175 { 00:17:05.175 "name": "BaseBdev4", 00:17:05.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.175 "is_configured": false, 00:17:05.175 "data_offset": 0, 00:17:05.175 "data_size": 0 00:17:05.175 } 00:17:05.175 ] 00:17:05.175 }' 00:17:05.175 19:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.176 19:15:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.435 [2024-11-27 19:15:15.043451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.435 [2024-11-27 19:15:15.043561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.435 [2024-11-27 19:15:15.051449] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.435 [2024-11-27 19:15:15.051529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.435 [2024-11-27 19:15:15.051555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.435 [2024-11-27 19:15:15.051576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.435 [2024-11-27 19:15:15.051592] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.435 [2024-11-27 19:15:15.051612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.435 [2024-11-27 19:15:15.051628] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.435 [2024-11-27 19:15:15.051664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.435 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.695 [2024-11-27 19:15:15.101459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.695 BaseBdev1 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.695 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.695 [ 00:17:05.695 { 00:17:05.695 "name": "BaseBdev1", 00:17:05.695 "aliases": [ 00:17:05.695 "b6830b76-04b7-4e61-bed1-16de51e81e56" 00:17:05.695 ], 00:17:05.695 "product_name": "Malloc disk", 00:17:05.695 "block_size": 512, 00:17:05.695 "num_blocks": 65536, 00:17:05.695 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:05.695 "assigned_rate_limits": { 00:17:05.695 "rw_ios_per_sec": 0, 00:17:05.695 "rw_mbytes_per_sec": 0, 00:17:05.695 "r_mbytes_per_sec": 0, 00:17:05.695 "w_mbytes_per_sec": 0 00:17:05.695 }, 00:17:05.695 "claimed": true, 00:17:05.695 "claim_type": "exclusive_write", 00:17:05.695 "zoned": false, 00:17:05.695 "supported_io_types": { 00:17:05.695 "read": true, 00:17:05.695 "write": true, 00:17:05.695 "unmap": true, 00:17:05.695 "flush": true, 00:17:05.695 "reset": true, 00:17:05.695 "nvme_admin": false, 00:17:05.695 "nvme_io": false, 00:17:05.695 "nvme_io_md": false, 00:17:05.695 "write_zeroes": true, 00:17:05.695 "zcopy": true, 00:17:05.695 "get_zone_info": false, 00:17:05.695 "zone_management": false, 00:17:05.696 "zone_append": false, 00:17:05.696 "compare": false, 00:17:05.696 "compare_and_write": false, 00:17:05.696 "abort": true, 00:17:05.696 "seek_hole": false, 00:17:05.696 "seek_data": false, 00:17:05.696 "copy": true, 00:17:05.696 "nvme_iov_md": false 00:17:05.696 }, 00:17:05.696 "memory_domains": [ 00:17:05.696 { 00:17:05.696 "dma_device_id": "system", 00:17:05.696 "dma_device_type": 1 00:17:05.696 }, 00:17:05.696 { 00:17:05.696 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:05.696 "dma_device_type": 2 00:17:05.696 } 00:17:05.696 ], 00:17:05.696 "driver_specific": {} 00:17:05.696 } 00:17:05.696 ] 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.696 19:15:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.696 "name": "Existed_Raid", 00:17:05.696 "uuid": "7e17bc1e-a981-4831-8d6a-379202ac3383", 00:17:05.696 "strip_size_kb": 64, 00:17:05.696 "state": "configuring", 00:17:05.696 "raid_level": "raid5f", 00:17:05.696 "superblock": true, 00:17:05.696 "num_base_bdevs": 4, 00:17:05.696 "num_base_bdevs_discovered": 1, 00:17:05.696 "num_base_bdevs_operational": 4, 00:17:05.696 "base_bdevs_list": [ 00:17:05.696 { 00:17:05.696 "name": "BaseBdev1", 00:17:05.696 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:05.696 "is_configured": true, 00:17:05.696 "data_offset": 2048, 00:17:05.696 "data_size": 63488 00:17:05.696 }, 00:17:05.696 { 00:17:05.696 "name": "BaseBdev2", 00:17:05.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.696 "is_configured": false, 00:17:05.696 "data_offset": 0, 00:17:05.696 "data_size": 0 00:17:05.696 }, 00:17:05.696 { 00:17:05.696 "name": "BaseBdev3", 00:17:05.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.696 "is_configured": false, 00:17:05.696 "data_offset": 0, 00:17:05.696 "data_size": 0 00:17:05.696 }, 00:17:05.696 { 00:17:05.696 "name": "BaseBdev4", 00:17:05.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.696 "is_configured": false, 00:17:05.696 "data_offset": 0, 00:17:05.696 "data_size": 0 00:17:05.696 } 00:17:05.696 ] 00:17:05.696 }' 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.696 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.955 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.955 19:15:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.955 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.955 [2024-11-27 19:15:15.584648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.955 [2024-11-27 19:15:15.584777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:05.955 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.955 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.955 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.214 [2024-11-27 19:15:15.596699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.214 [2024-11-27 19:15:15.598915] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.214 [2024-11-27 19:15:15.599018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.214 [2024-11-27 19:15:15.599047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.214 [2024-11-27 19:15:15.599072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.214 [2024-11-27 19:15:15.599091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.214 [2024-11-27 19:15:15.599111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.214 19:15:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.214 "name": "Existed_Raid", 00:17:06.214 "uuid": "c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:06.214 "strip_size_kb": 64, 00:17:06.214 "state": "configuring", 00:17:06.214 "raid_level": "raid5f", 00:17:06.214 "superblock": true, 00:17:06.214 "num_base_bdevs": 4, 00:17:06.214 "num_base_bdevs_discovered": 1, 00:17:06.214 "num_base_bdevs_operational": 4, 00:17:06.214 "base_bdevs_list": [ 00:17:06.214 { 00:17:06.214 "name": "BaseBdev1", 00:17:06.214 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:06.214 "is_configured": true, 00:17:06.214 "data_offset": 2048, 00:17:06.214 "data_size": 63488 00:17:06.214 }, 00:17:06.214 { 00:17:06.214 "name": "BaseBdev2", 00:17:06.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.214 "is_configured": false, 00:17:06.214 "data_offset": 0, 00:17:06.214 "data_size": 0 00:17:06.214 }, 00:17:06.214 { 00:17:06.214 "name": "BaseBdev3", 00:17:06.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.214 "is_configured": false, 00:17:06.214 "data_offset": 0, 00:17:06.214 "data_size": 0 00:17:06.214 }, 00:17:06.214 { 00:17:06.214 "name": "BaseBdev4", 00:17:06.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.214 "is_configured": false, 00:17:06.214 "data_offset": 0, 00:17:06.214 "data_size": 0 00:17:06.214 } 00:17:06.214 ] 00:17:06.214 }' 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.214 19:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.474 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:06.474 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:06.474 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.734 [2024-11-27 19:15:16.110755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.734 BaseBdev2 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.734 [ 00:17:06.734 { 00:17:06.734 "name": "BaseBdev2", 00:17:06.734 "aliases": [ 00:17:06.734 
"70e111ef-e5cc-4a50-9f07-f677eeeccdc8" 00:17:06.734 ], 00:17:06.734 "product_name": "Malloc disk", 00:17:06.734 "block_size": 512, 00:17:06.734 "num_blocks": 65536, 00:17:06.734 "uuid": "70e111ef-e5cc-4a50-9f07-f677eeeccdc8", 00:17:06.734 "assigned_rate_limits": { 00:17:06.734 "rw_ios_per_sec": 0, 00:17:06.734 "rw_mbytes_per_sec": 0, 00:17:06.734 "r_mbytes_per_sec": 0, 00:17:06.734 "w_mbytes_per_sec": 0 00:17:06.734 }, 00:17:06.734 "claimed": true, 00:17:06.734 "claim_type": "exclusive_write", 00:17:06.734 "zoned": false, 00:17:06.734 "supported_io_types": { 00:17:06.734 "read": true, 00:17:06.734 "write": true, 00:17:06.734 "unmap": true, 00:17:06.734 "flush": true, 00:17:06.734 "reset": true, 00:17:06.734 "nvme_admin": false, 00:17:06.734 "nvme_io": false, 00:17:06.734 "nvme_io_md": false, 00:17:06.734 "write_zeroes": true, 00:17:06.734 "zcopy": true, 00:17:06.734 "get_zone_info": false, 00:17:06.734 "zone_management": false, 00:17:06.734 "zone_append": false, 00:17:06.734 "compare": false, 00:17:06.734 "compare_and_write": false, 00:17:06.734 "abort": true, 00:17:06.734 "seek_hole": false, 00:17:06.734 "seek_data": false, 00:17:06.734 "copy": true, 00:17:06.734 "nvme_iov_md": false 00:17:06.734 }, 00:17:06.734 "memory_domains": [ 00:17:06.734 { 00:17:06.734 "dma_device_id": "system", 00:17:06.734 "dma_device_type": 1 00:17:06.734 }, 00:17:06.734 { 00:17:06.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.734 "dma_device_type": 2 00:17:06.734 } 00:17:06.734 ], 00:17:06.734 "driver_specific": {} 00:17:06.734 } 00:17:06.734 ] 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.734 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.734 "name": "Existed_Raid", 00:17:06.734 "uuid": 
"c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:06.734 "strip_size_kb": 64, 00:17:06.734 "state": "configuring", 00:17:06.734 "raid_level": "raid5f", 00:17:06.734 "superblock": true, 00:17:06.734 "num_base_bdevs": 4, 00:17:06.734 "num_base_bdevs_discovered": 2, 00:17:06.734 "num_base_bdevs_operational": 4, 00:17:06.735 "base_bdevs_list": [ 00:17:06.735 { 00:17:06.735 "name": "BaseBdev1", 00:17:06.735 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:06.735 "is_configured": true, 00:17:06.735 "data_offset": 2048, 00:17:06.735 "data_size": 63488 00:17:06.735 }, 00:17:06.735 { 00:17:06.735 "name": "BaseBdev2", 00:17:06.735 "uuid": "70e111ef-e5cc-4a50-9f07-f677eeeccdc8", 00:17:06.735 "is_configured": true, 00:17:06.735 "data_offset": 2048, 00:17:06.735 "data_size": 63488 00:17:06.735 }, 00:17:06.735 { 00:17:06.735 "name": "BaseBdev3", 00:17:06.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.735 "is_configured": false, 00:17:06.735 "data_offset": 0, 00:17:06.735 "data_size": 0 00:17:06.735 }, 00:17:06.735 { 00:17:06.735 "name": "BaseBdev4", 00:17:06.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.735 "is_configured": false, 00:17:06.735 "data_offset": 0, 00:17:06.735 "data_size": 0 00:17:06.735 } 00:17:06.735 ] 00:17:06.735 }' 00:17:06.735 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.735 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.995 [2024-11-27 19:15:16.617023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.995 BaseBdev3 
00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.995 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.255 [ 00:17:07.255 { 00:17:07.255 "name": "BaseBdev3", 00:17:07.255 "aliases": [ 00:17:07.255 "105f50e4-4b30-4358-a40b-0c93a0e63e83" 00:17:07.255 ], 00:17:07.255 "product_name": "Malloc disk", 00:17:07.255 "block_size": 512, 00:17:07.255 "num_blocks": 65536, 00:17:07.255 "uuid": "105f50e4-4b30-4358-a40b-0c93a0e63e83", 00:17:07.255 
"assigned_rate_limits": { 00:17:07.255 "rw_ios_per_sec": 0, 00:17:07.255 "rw_mbytes_per_sec": 0, 00:17:07.255 "r_mbytes_per_sec": 0, 00:17:07.255 "w_mbytes_per_sec": 0 00:17:07.255 }, 00:17:07.255 "claimed": true, 00:17:07.255 "claim_type": "exclusive_write", 00:17:07.255 "zoned": false, 00:17:07.255 "supported_io_types": { 00:17:07.255 "read": true, 00:17:07.255 "write": true, 00:17:07.255 "unmap": true, 00:17:07.255 "flush": true, 00:17:07.255 "reset": true, 00:17:07.255 "nvme_admin": false, 00:17:07.255 "nvme_io": false, 00:17:07.255 "nvme_io_md": false, 00:17:07.255 "write_zeroes": true, 00:17:07.255 "zcopy": true, 00:17:07.255 "get_zone_info": false, 00:17:07.255 "zone_management": false, 00:17:07.255 "zone_append": false, 00:17:07.255 "compare": false, 00:17:07.255 "compare_and_write": false, 00:17:07.255 "abort": true, 00:17:07.255 "seek_hole": false, 00:17:07.255 "seek_data": false, 00:17:07.255 "copy": true, 00:17:07.255 "nvme_iov_md": false 00:17:07.255 }, 00:17:07.255 "memory_domains": [ 00:17:07.255 { 00:17:07.255 "dma_device_id": "system", 00:17:07.255 "dma_device_type": 1 00:17:07.255 }, 00:17:07.255 { 00:17:07.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.255 "dma_device_type": 2 00:17:07.255 } 00:17:07.255 ], 00:17:07.255 "driver_specific": {} 00:17:07.255 } 00:17:07.255 ] 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.255 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.255 "name": "Existed_Raid", 00:17:07.255 "uuid": "c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:07.255 "strip_size_kb": 64, 00:17:07.255 "state": "configuring", 00:17:07.255 "raid_level": "raid5f", 00:17:07.255 "superblock": true, 00:17:07.255 "num_base_bdevs": 4, 00:17:07.255 "num_base_bdevs_discovered": 3, 
00:17:07.255 "num_base_bdevs_operational": 4, 00:17:07.255 "base_bdevs_list": [ 00:17:07.255 { 00:17:07.255 "name": "BaseBdev1", 00:17:07.255 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:07.255 "is_configured": true, 00:17:07.255 "data_offset": 2048, 00:17:07.255 "data_size": 63488 00:17:07.255 }, 00:17:07.255 { 00:17:07.255 "name": "BaseBdev2", 00:17:07.255 "uuid": "70e111ef-e5cc-4a50-9f07-f677eeeccdc8", 00:17:07.255 "is_configured": true, 00:17:07.255 "data_offset": 2048, 00:17:07.255 "data_size": 63488 00:17:07.255 }, 00:17:07.255 { 00:17:07.255 "name": "BaseBdev3", 00:17:07.255 "uuid": "105f50e4-4b30-4358-a40b-0c93a0e63e83", 00:17:07.255 "is_configured": true, 00:17:07.255 "data_offset": 2048, 00:17:07.255 "data_size": 63488 00:17:07.255 }, 00:17:07.255 { 00:17:07.255 "name": "BaseBdev4", 00:17:07.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.255 "is_configured": false, 00:17:07.256 "data_offset": 0, 00:17:07.256 "data_size": 0 00:17:07.256 } 00:17:07.256 ] 00:17:07.256 }' 00:17:07.256 19:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.256 19:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.515 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.516 [2024-11-27 19:15:17.108097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:07.516 [2024-11-27 19:15:17.108536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:07.516 [2024-11-27 19:15:17.108558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.516 [2024-11-27 
19:15:17.108886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:07.516 BaseBdev4 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.516 [2024-11-27 19:15:17.116070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:07.516 [2024-11-27 19:15:17.116140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:07.516 [2024-11-27 19:15:17.116461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:07.516 19:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.516 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.516 [ 00:17:07.516 { 00:17:07.516 "name": "BaseBdev4", 00:17:07.516 "aliases": [ 00:17:07.516 "066d24f5-1a01-4aa1-a20d-6792a3c8d526" 00:17:07.516 ], 00:17:07.516 "product_name": "Malloc disk", 00:17:07.516 "block_size": 512, 00:17:07.516 "num_blocks": 65536, 00:17:07.516 "uuid": "066d24f5-1a01-4aa1-a20d-6792a3c8d526", 00:17:07.516 "assigned_rate_limits": { 00:17:07.516 "rw_ios_per_sec": 0, 00:17:07.516 "rw_mbytes_per_sec": 0, 00:17:07.516 "r_mbytes_per_sec": 0, 00:17:07.516 "w_mbytes_per_sec": 0 00:17:07.516 }, 00:17:07.516 "claimed": true, 00:17:07.516 "claim_type": "exclusive_write", 00:17:07.516 "zoned": false, 00:17:07.516 "supported_io_types": { 00:17:07.516 "read": true, 00:17:07.516 "write": true, 00:17:07.516 "unmap": true, 00:17:07.516 "flush": true, 00:17:07.516 "reset": true, 00:17:07.516 "nvme_admin": false, 00:17:07.516 "nvme_io": false, 00:17:07.516 "nvme_io_md": false, 00:17:07.516 "write_zeroes": true, 00:17:07.516 "zcopy": true, 00:17:07.516 "get_zone_info": false, 00:17:07.516 "zone_management": false, 00:17:07.516 "zone_append": false, 00:17:07.516 "compare": false, 00:17:07.516 "compare_and_write": false, 00:17:07.516 "abort": true, 00:17:07.516 "seek_hole": false, 00:17:07.516 "seek_data": false, 00:17:07.516 "copy": true, 00:17:07.516 "nvme_iov_md": false 00:17:07.516 }, 00:17:07.516 "memory_domains": [ 00:17:07.516 { 00:17:07.516 "dma_device_id": "system", 00:17:07.516 "dma_device_type": 1 00:17:07.516 }, 00:17:07.516 { 00:17:07.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.516 "dma_device_type": 2 00:17:07.516 } 00:17:07.516 ], 00:17:07.516 "driver_specific": {} 00:17:07.516 } 00:17:07.516 ] 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.775 19:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.775 "name": "Existed_Raid", 00:17:07.775 "uuid": "c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:07.775 "strip_size_kb": 64, 00:17:07.775 "state": "online", 00:17:07.775 "raid_level": "raid5f", 00:17:07.775 "superblock": true, 00:17:07.775 "num_base_bdevs": 4, 00:17:07.775 "num_base_bdevs_discovered": 4, 00:17:07.775 "num_base_bdevs_operational": 4, 00:17:07.775 "base_bdevs_list": [ 00:17:07.775 { 00:17:07.775 "name": "BaseBdev1", 00:17:07.775 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:07.775 "is_configured": true, 00:17:07.775 "data_offset": 2048, 00:17:07.775 "data_size": 63488 00:17:07.775 }, 00:17:07.775 { 00:17:07.775 "name": "BaseBdev2", 00:17:07.775 "uuid": "70e111ef-e5cc-4a50-9f07-f677eeeccdc8", 00:17:07.775 "is_configured": true, 00:17:07.775 "data_offset": 2048, 00:17:07.775 "data_size": 63488 00:17:07.775 }, 00:17:07.775 { 00:17:07.775 "name": "BaseBdev3", 00:17:07.775 "uuid": "105f50e4-4b30-4358-a40b-0c93a0e63e83", 00:17:07.775 "is_configured": true, 00:17:07.775 "data_offset": 2048, 00:17:07.775 "data_size": 63488 00:17:07.775 }, 00:17:07.775 { 00:17:07.775 "name": "BaseBdev4", 00:17:07.775 "uuid": "066d24f5-1a01-4aa1-a20d-6792a3c8d526", 00:17:07.775 "is_configured": true, 00:17:07.775 "data_offset": 2048, 00:17:07.775 "data_size": 63488 00:17:07.775 } 00:17:07.775 ] 00:17:07.775 }' 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.775 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.034 [2024-11-27 19:15:17.560723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.034 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.034 "name": "Existed_Raid", 00:17:08.034 "aliases": [ 00:17:08.034 "c7916d88-1986-4da5-81ec-46824d37cf74" 00:17:08.034 ], 00:17:08.034 "product_name": "Raid Volume", 00:17:08.034 "block_size": 512, 00:17:08.034 "num_blocks": 190464, 00:17:08.034 "uuid": "c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:08.034 "assigned_rate_limits": { 00:17:08.034 "rw_ios_per_sec": 0, 00:17:08.034 "rw_mbytes_per_sec": 0, 00:17:08.034 "r_mbytes_per_sec": 0, 00:17:08.034 "w_mbytes_per_sec": 0 00:17:08.034 }, 00:17:08.034 "claimed": false, 00:17:08.034 "zoned": false, 00:17:08.034 "supported_io_types": { 00:17:08.034 "read": true, 00:17:08.034 "write": true, 00:17:08.034 "unmap": false, 00:17:08.034 "flush": false, 
00:17:08.034 "reset": true, 00:17:08.034 "nvme_admin": false, 00:17:08.034 "nvme_io": false, 00:17:08.035 "nvme_io_md": false, 00:17:08.035 "write_zeroes": true, 00:17:08.035 "zcopy": false, 00:17:08.035 "get_zone_info": false, 00:17:08.035 "zone_management": false, 00:17:08.035 "zone_append": false, 00:17:08.035 "compare": false, 00:17:08.035 "compare_and_write": false, 00:17:08.035 "abort": false, 00:17:08.035 "seek_hole": false, 00:17:08.035 "seek_data": false, 00:17:08.035 "copy": false, 00:17:08.035 "nvme_iov_md": false 00:17:08.035 }, 00:17:08.035 "driver_specific": { 00:17:08.035 "raid": { 00:17:08.035 "uuid": "c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:08.035 "strip_size_kb": 64, 00:17:08.035 "state": "online", 00:17:08.035 "raid_level": "raid5f", 00:17:08.035 "superblock": true, 00:17:08.035 "num_base_bdevs": 4, 00:17:08.035 "num_base_bdevs_discovered": 4, 00:17:08.035 "num_base_bdevs_operational": 4, 00:17:08.035 "base_bdevs_list": [ 00:17:08.035 { 00:17:08.035 "name": "BaseBdev1", 00:17:08.035 "uuid": "b6830b76-04b7-4e61-bed1-16de51e81e56", 00:17:08.035 "is_configured": true, 00:17:08.035 "data_offset": 2048, 00:17:08.035 "data_size": 63488 00:17:08.035 }, 00:17:08.035 { 00:17:08.035 "name": "BaseBdev2", 00:17:08.035 "uuid": "70e111ef-e5cc-4a50-9f07-f677eeeccdc8", 00:17:08.035 "is_configured": true, 00:17:08.035 "data_offset": 2048, 00:17:08.035 "data_size": 63488 00:17:08.035 }, 00:17:08.035 { 00:17:08.035 "name": "BaseBdev3", 00:17:08.035 "uuid": "105f50e4-4b30-4358-a40b-0c93a0e63e83", 00:17:08.035 "is_configured": true, 00:17:08.035 "data_offset": 2048, 00:17:08.035 "data_size": 63488 00:17:08.035 }, 00:17:08.035 { 00:17:08.035 "name": "BaseBdev4", 00:17:08.035 "uuid": "066d24f5-1a01-4aa1-a20d-6792a3c8d526", 00:17:08.035 "is_configured": true, 00:17:08.035 "data_offset": 2048, 00:17:08.035 "data_size": 63488 00:17:08.035 } 00:17:08.035 ] 00:17:08.035 } 00:17:08.035 } 00:17:08.035 }' 00:17:08.035 19:15:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.035 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:08.035 BaseBdev2 00:17:08.035 BaseBdev3 00:17:08.035 BaseBdev4' 00:17:08.035 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:08.294 19:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.294 19:15:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.294 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.294 [2024-11-27 19:15:17.860004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.554 19:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.554 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.554 "name": "Existed_Raid", 00:17:08.554 "uuid": "c7916d88-1986-4da5-81ec-46824d37cf74", 00:17:08.554 "strip_size_kb": 64, 00:17:08.554 "state": "online", 00:17:08.554 "raid_level": "raid5f", 00:17:08.554 "superblock": true, 00:17:08.554 "num_base_bdevs": 4, 00:17:08.554 "num_base_bdevs_discovered": 3, 00:17:08.554 "num_base_bdevs_operational": 3, 00:17:08.554 "base_bdevs_list": [ 00:17:08.554 { 00:17:08.554 "name": 
null, 00:17:08.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.554 "is_configured": false, 00:17:08.554 "data_offset": 0, 00:17:08.554 "data_size": 63488 00:17:08.554 }, 00:17:08.554 { 00:17:08.554 "name": "BaseBdev2", 00:17:08.554 "uuid": "70e111ef-e5cc-4a50-9f07-f677eeeccdc8", 00:17:08.554 "is_configured": true, 00:17:08.554 "data_offset": 2048, 00:17:08.554 "data_size": 63488 00:17:08.554 }, 00:17:08.554 { 00:17:08.554 "name": "BaseBdev3", 00:17:08.554 "uuid": "105f50e4-4b30-4358-a40b-0c93a0e63e83", 00:17:08.554 "is_configured": true, 00:17:08.554 "data_offset": 2048, 00:17:08.554 "data_size": 63488 00:17:08.554 }, 00:17:08.554 { 00:17:08.554 "name": "BaseBdev4", 00:17:08.554 "uuid": "066d24f5-1a01-4aa1-a20d-6792a3c8d526", 00:17:08.554 "is_configured": true, 00:17:08.554 "data_offset": 2048, 00:17:08.554 "data_size": 63488 00:17:08.554 } 00:17:08.554 ] 00:17:08.554 }' 00:17:08.554 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.554 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.815 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 [2024-11-27 19:15:18.405926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:08.815 [2024-11-27 19:15:18.406118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.075 [2024-11-27 19:15:18.510309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.075 [2024-11-27 19:15:18.570245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.075 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.335 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:09.335 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.335 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.336 [2024-11-27 
19:15:18.731384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:09.336 [2024-11-27 19:15:18.731515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 19:15:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.336 BaseBdev2 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.336 [ 00:17:09.336 { 00:17:09.336 "name": "BaseBdev2", 00:17:09.336 "aliases": [ 00:17:09.336 "8e1619d8-873c-4440-9283-a315f6bb62d5" 00:17:09.336 ], 00:17:09.336 "product_name": "Malloc disk", 00:17:09.336 "block_size": 512, 00:17:09.336 
"num_blocks": 65536, 00:17:09.336 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:09.336 "assigned_rate_limits": { 00:17:09.336 "rw_ios_per_sec": 0, 00:17:09.336 "rw_mbytes_per_sec": 0, 00:17:09.336 "r_mbytes_per_sec": 0, 00:17:09.336 "w_mbytes_per_sec": 0 00:17:09.336 }, 00:17:09.336 "claimed": false, 00:17:09.336 "zoned": false, 00:17:09.336 "supported_io_types": { 00:17:09.336 "read": true, 00:17:09.336 "write": true, 00:17:09.336 "unmap": true, 00:17:09.336 "flush": true, 00:17:09.336 "reset": true, 00:17:09.336 "nvme_admin": false, 00:17:09.336 "nvme_io": false, 00:17:09.336 "nvme_io_md": false, 00:17:09.336 "write_zeroes": true, 00:17:09.336 "zcopy": true, 00:17:09.336 "get_zone_info": false, 00:17:09.336 "zone_management": false, 00:17:09.336 "zone_append": false, 00:17:09.336 "compare": false, 00:17:09.336 "compare_and_write": false, 00:17:09.336 "abort": true, 00:17:09.336 "seek_hole": false, 00:17:09.336 "seek_data": false, 00:17:09.336 "copy": true, 00:17:09.336 "nvme_iov_md": false 00:17:09.336 }, 00:17:09.336 "memory_domains": [ 00:17:09.336 { 00:17:09.336 "dma_device_id": "system", 00:17:09.336 "dma_device_type": 1 00:17:09.336 }, 00:17:09.336 { 00:17:09.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.336 "dma_device_type": 2 00:17:09.336 } 00:17:09.336 ], 00:17:09.336 "driver_specific": {} 00:17:09.336 } 00:17:09.336 ] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:09.336 19:15:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.336 19:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 BaseBdev3 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 [ 00:17:09.597 { 00:17:09.597 "name": "BaseBdev3", 00:17:09.597 "aliases": [ 00:17:09.597 
"09bcdef1-a644-40f2-900b-57765aca5640" 00:17:09.597 ], 00:17:09.597 "product_name": "Malloc disk", 00:17:09.597 "block_size": 512, 00:17:09.597 "num_blocks": 65536, 00:17:09.597 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:09.597 "assigned_rate_limits": { 00:17:09.597 "rw_ios_per_sec": 0, 00:17:09.597 "rw_mbytes_per_sec": 0, 00:17:09.597 "r_mbytes_per_sec": 0, 00:17:09.597 "w_mbytes_per_sec": 0 00:17:09.597 }, 00:17:09.597 "claimed": false, 00:17:09.597 "zoned": false, 00:17:09.597 "supported_io_types": { 00:17:09.597 "read": true, 00:17:09.597 "write": true, 00:17:09.597 "unmap": true, 00:17:09.597 "flush": true, 00:17:09.597 "reset": true, 00:17:09.597 "nvme_admin": false, 00:17:09.597 "nvme_io": false, 00:17:09.597 "nvme_io_md": false, 00:17:09.597 "write_zeroes": true, 00:17:09.597 "zcopy": true, 00:17:09.597 "get_zone_info": false, 00:17:09.597 "zone_management": false, 00:17:09.597 "zone_append": false, 00:17:09.597 "compare": false, 00:17:09.597 "compare_and_write": false, 00:17:09.597 "abort": true, 00:17:09.597 "seek_hole": false, 00:17:09.597 "seek_data": false, 00:17:09.597 "copy": true, 00:17:09.597 "nvme_iov_md": false 00:17:09.597 }, 00:17:09.597 "memory_domains": [ 00:17:09.597 { 00:17:09.597 "dma_device_id": "system", 00:17:09.597 "dma_device_type": 1 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.597 "dma_device_type": 2 00:17:09.597 } 00:17:09.597 ], 00:17:09.597 "driver_specific": {} 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.597 19:15:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 BaseBdev4 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:09.597 [ 00:17:09.597 { 00:17:09.597 "name": "BaseBdev4", 00:17:09.597 "aliases": [ 00:17:09.597 "454882d8-c9ff-4c5f-bf95-81a8185b42f2" 00:17:09.597 ], 00:17:09.597 "product_name": "Malloc disk", 00:17:09.597 "block_size": 512, 00:17:09.597 "num_blocks": 65536, 00:17:09.597 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:09.597 "assigned_rate_limits": { 00:17:09.597 "rw_ios_per_sec": 0, 00:17:09.597 "rw_mbytes_per_sec": 0, 00:17:09.597 "r_mbytes_per_sec": 0, 00:17:09.597 "w_mbytes_per_sec": 0 00:17:09.597 }, 00:17:09.597 "claimed": false, 00:17:09.597 "zoned": false, 00:17:09.597 "supported_io_types": { 00:17:09.597 "read": true, 00:17:09.597 "write": true, 00:17:09.597 "unmap": true, 00:17:09.597 "flush": true, 00:17:09.597 "reset": true, 00:17:09.597 "nvme_admin": false, 00:17:09.597 "nvme_io": false, 00:17:09.597 "nvme_io_md": false, 00:17:09.597 "write_zeroes": true, 00:17:09.597 "zcopy": true, 00:17:09.597 "get_zone_info": false, 00:17:09.597 "zone_management": false, 00:17:09.597 "zone_append": false, 00:17:09.597 "compare": false, 00:17:09.597 "compare_and_write": false, 00:17:09.597 "abort": true, 00:17:09.597 "seek_hole": false, 00:17:09.597 "seek_data": false, 00:17:09.597 "copy": true, 00:17:09.597 "nvme_iov_md": false 00:17:09.597 }, 00:17:09.597 "memory_domains": [ 00:17:09.597 { 00:17:09.597 "dma_device_id": "system", 00:17:09.597 "dma_device_type": 1 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.597 "dma_device_type": 2 00:17:09.597 } 00:17:09.597 ], 00:17:09.597 "driver_specific": {} 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:09.597 19:15:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 [2024-11-27 19:15:19.140278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.597 [2024-11-27 19:15:19.140413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.597 [2024-11-27 19:15:19.140467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.597 [2024-11-27 19:15:19.142536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.597 [2024-11-27 19:15:19.142647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.597 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.598 "name": "Existed_Raid", 00:17:09.598 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:09.598 "strip_size_kb": 64, 00:17:09.598 "state": "configuring", 00:17:09.598 "raid_level": "raid5f", 00:17:09.598 "superblock": true, 00:17:09.598 "num_base_bdevs": 4, 00:17:09.598 "num_base_bdevs_discovered": 3, 00:17:09.598 "num_base_bdevs_operational": 4, 00:17:09.598 "base_bdevs_list": [ 00:17:09.598 { 00:17:09.598 "name": "BaseBdev1", 00:17:09.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.598 "is_configured": false, 00:17:09.598 "data_offset": 0, 00:17:09.598 "data_size": 0 00:17:09.598 }, 00:17:09.598 { 00:17:09.598 "name": "BaseBdev2", 00:17:09.598 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:09.598 "is_configured": true, 00:17:09.598 "data_offset": 2048, 00:17:09.598 
"data_size": 63488 00:17:09.598 }, 00:17:09.598 { 00:17:09.598 "name": "BaseBdev3", 00:17:09.598 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:09.598 "is_configured": true, 00:17:09.598 "data_offset": 2048, 00:17:09.598 "data_size": 63488 00:17:09.598 }, 00:17:09.598 { 00:17:09.598 "name": "BaseBdev4", 00:17:09.598 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:09.598 "is_configured": true, 00:17:09.598 "data_offset": 2048, 00:17:09.598 "data_size": 63488 00:17:09.598 } 00:17:09.598 ] 00:17:09.598 }' 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.598 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.166 [2024-11-27 19:15:19.587778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.166 19:15:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.166 "name": "Existed_Raid", 00:17:10.166 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:10.166 "strip_size_kb": 64, 00:17:10.166 "state": "configuring", 00:17:10.166 "raid_level": "raid5f", 00:17:10.166 "superblock": true, 00:17:10.166 "num_base_bdevs": 4, 00:17:10.166 "num_base_bdevs_discovered": 2, 00:17:10.166 "num_base_bdevs_operational": 4, 00:17:10.166 "base_bdevs_list": [ 00:17:10.166 { 00:17:10.166 "name": "BaseBdev1", 00:17:10.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.166 "is_configured": false, 00:17:10.166 "data_offset": 0, 00:17:10.166 "data_size": 0 00:17:10.166 }, 00:17:10.166 { 00:17:10.166 "name": null, 00:17:10.166 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:10.166 
"is_configured": false, 00:17:10.166 "data_offset": 0, 00:17:10.166 "data_size": 63488 00:17:10.166 }, 00:17:10.166 { 00:17:10.166 "name": "BaseBdev3", 00:17:10.166 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:10.166 "is_configured": true, 00:17:10.166 "data_offset": 2048, 00:17:10.166 "data_size": 63488 00:17:10.166 }, 00:17:10.166 { 00:17:10.166 "name": "BaseBdev4", 00:17:10.166 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:10.166 "is_configured": true, 00:17:10.166 "data_offset": 2048, 00:17:10.166 "data_size": 63488 00:17:10.166 } 00:17:10.166 ] 00:17:10.166 }' 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.166 19:15:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.426 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.685 [2024-11-27 19:15:20.092913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:10.685 BaseBdev1 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.685 [ 00:17:10.685 { 00:17:10.685 "name": "BaseBdev1", 00:17:10.685 "aliases": [ 00:17:10.685 "d7ea5392-af8b-4e44-bfd2-61444d0a5ade" 00:17:10.685 ], 00:17:10.685 "product_name": "Malloc disk", 00:17:10.685 "block_size": 512, 00:17:10.685 "num_blocks": 65536, 00:17:10.685 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 
00:17:10.685 "assigned_rate_limits": { 00:17:10.685 "rw_ios_per_sec": 0, 00:17:10.685 "rw_mbytes_per_sec": 0, 00:17:10.685 "r_mbytes_per_sec": 0, 00:17:10.685 "w_mbytes_per_sec": 0 00:17:10.685 }, 00:17:10.685 "claimed": true, 00:17:10.685 "claim_type": "exclusive_write", 00:17:10.685 "zoned": false, 00:17:10.685 "supported_io_types": { 00:17:10.685 "read": true, 00:17:10.685 "write": true, 00:17:10.685 "unmap": true, 00:17:10.685 "flush": true, 00:17:10.685 "reset": true, 00:17:10.685 "nvme_admin": false, 00:17:10.685 "nvme_io": false, 00:17:10.685 "nvme_io_md": false, 00:17:10.685 "write_zeroes": true, 00:17:10.685 "zcopy": true, 00:17:10.685 "get_zone_info": false, 00:17:10.685 "zone_management": false, 00:17:10.685 "zone_append": false, 00:17:10.685 "compare": false, 00:17:10.685 "compare_and_write": false, 00:17:10.685 "abort": true, 00:17:10.685 "seek_hole": false, 00:17:10.685 "seek_data": false, 00:17:10.685 "copy": true, 00:17:10.685 "nvme_iov_md": false 00:17:10.685 }, 00:17:10.685 "memory_domains": [ 00:17:10.685 { 00:17:10.685 "dma_device_id": "system", 00:17:10.685 "dma_device_type": 1 00:17:10.685 }, 00:17:10.685 { 00:17:10.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.685 "dma_device_type": 2 00:17:10.685 } 00:17:10.685 ], 00:17:10.685 "driver_specific": {} 00:17:10.685 } 00:17:10.685 ] 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.685 19:15:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.685 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.685 "name": "Existed_Raid", 00:17:10.685 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:10.685 "strip_size_kb": 64, 00:17:10.685 "state": "configuring", 00:17:10.685 "raid_level": "raid5f", 00:17:10.685 "superblock": true, 00:17:10.685 "num_base_bdevs": 4, 00:17:10.685 "num_base_bdevs_discovered": 3, 00:17:10.685 "num_base_bdevs_operational": 4, 00:17:10.685 "base_bdevs_list": [ 00:17:10.685 { 00:17:10.685 "name": "BaseBdev1", 00:17:10.685 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 
00:17:10.685 "is_configured": true, 00:17:10.686 "data_offset": 2048, 00:17:10.686 "data_size": 63488 00:17:10.686 }, 00:17:10.686 { 00:17:10.686 "name": null, 00:17:10.686 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:10.686 "is_configured": false, 00:17:10.686 "data_offset": 0, 00:17:10.686 "data_size": 63488 00:17:10.686 }, 00:17:10.686 { 00:17:10.686 "name": "BaseBdev3", 00:17:10.686 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:10.686 "is_configured": true, 00:17:10.686 "data_offset": 2048, 00:17:10.686 "data_size": 63488 00:17:10.686 }, 00:17:10.686 { 00:17:10.686 "name": "BaseBdev4", 00:17:10.686 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:10.686 "is_configured": true, 00:17:10.686 "data_offset": 2048, 00:17:10.686 "data_size": 63488 00:17:10.686 } 00:17:10.686 ] 00:17:10.686 }' 00:17:10.686 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.686 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.945 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:10.945 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.945 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.945 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.204 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.204 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.205 [2024-11-27 19:15:20.592107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.205 "name": "Existed_Raid", 00:17:11.205 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:11.205 "strip_size_kb": 64, 00:17:11.205 "state": "configuring", 00:17:11.205 "raid_level": "raid5f", 00:17:11.205 "superblock": true, 00:17:11.205 "num_base_bdevs": 4, 00:17:11.205 "num_base_bdevs_discovered": 2, 00:17:11.205 "num_base_bdevs_operational": 4, 00:17:11.205 "base_bdevs_list": [ 00:17:11.205 { 00:17:11.205 "name": "BaseBdev1", 00:17:11.205 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:11.205 "is_configured": true, 00:17:11.205 "data_offset": 2048, 00:17:11.205 "data_size": 63488 00:17:11.205 }, 00:17:11.205 { 00:17:11.205 "name": null, 00:17:11.205 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:11.205 "is_configured": false, 00:17:11.205 "data_offset": 0, 00:17:11.205 "data_size": 63488 00:17:11.205 }, 00:17:11.205 { 00:17:11.205 "name": null, 00:17:11.205 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:11.205 "is_configured": false, 00:17:11.205 "data_offset": 0, 00:17:11.205 "data_size": 63488 00:17:11.205 }, 00:17:11.205 { 00:17:11.205 "name": "BaseBdev4", 00:17:11.205 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:11.205 "is_configured": true, 00:17:11.205 "data_offset": 2048, 00:17:11.205 "data_size": 63488 00:17:11.205 } 00:17:11.205 ] 00:17:11.205 }' 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.205 19:15:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.465 [2024-11-27 19:15:21.059860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.465 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.725 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.725 "name": "Existed_Raid", 00:17:11.725 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:11.725 "strip_size_kb": 64, 00:17:11.725 "state": "configuring", 00:17:11.725 "raid_level": "raid5f", 00:17:11.725 "superblock": true, 00:17:11.725 "num_base_bdevs": 4, 00:17:11.725 "num_base_bdevs_discovered": 3, 00:17:11.725 "num_base_bdevs_operational": 4, 00:17:11.725 "base_bdevs_list": [ 00:17:11.725 { 00:17:11.725 "name": "BaseBdev1", 00:17:11.725 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:11.725 "is_configured": true, 00:17:11.725 "data_offset": 2048, 00:17:11.725 "data_size": 63488 00:17:11.725 }, 00:17:11.725 { 00:17:11.725 "name": null, 00:17:11.725 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:11.725 "is_configured": false, 00:17:11.725 "data_offset": 0, 00:17:11.725 "data_size": 63488 00:17:11.725 }, 00:17:11.725 { 00:17:11.725 "name": "BaseBdev3", 00:17:11.725 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 
00:17:11.725 "is_configured": true, 00:17:11.725 "data_offset": 2048, 00:17:11.725 "data_size": 63488 00:17:11.725 }, 00:17:11.725 { 00:17:11.725 "name": "BaseBdev4", 00:17:11.725 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:11.725 "is_configured": true, 00:17:11.725 "data_offset": 2048, 00:17:11.725 "data_size": 63488 00:17:11.725 } 00:17:11.725 ] 00:17:11.725 }' 00:17:11.725 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.725 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.043 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.044 [2024-11-27 19:15:21.591477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.303 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.303 "name": "Existed_Raid", 00:17:12.303 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:12.303 "strip_size_kb": 64, 00:17:12.303 "state": "configuring", 00:17:12.303 "raid_level": "raid5f", 
00:17:12.303 "superblock": true, 00:17:12.303 "num_base_bdevs": 4, 00:17:12.303 "num_base_bdevs_discovered": 2, 00:17:12.303 "num_base_bdevs_operational": 4, 00:17:12.303 "base_bdevs_list": [ 00:17:12.303 { 00:17:12.303 "name": null, 00:17:12.304 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:12.304 "is_configured": false, 00:17:12.304 "data_offset": 0, 00:17:12.304 "data_size": 63488 00:17:12.304 }, 00:17:12.304 { 00:17:12.304 "name": null, 00:17:12.304 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:12.304 "is_configured": false, 00:17:12.304 "data_offset": 0, 00:17:12.304 "data_size": 63488 00:17:12.304 }, 00:17:12.304 { 00:17:12.304 "name": "BaseBdev3", 00:17:12.304 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:12.304 "is_configured": true, 00:17:12.304 "data_offset": 2048, 00:17:12.304 "data_size": 63488 00:17:12.304 }, 00:17:12.304 { 00:17:12.304 "name": "BaseBdev4", 00:17:12.304 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:12.304 "is_configured": true, 00:17:12.304 "data_offset": 2048, 00:17:12.304 "data_size": 63488 00:17:12.304 } 00:17:12.304 ] 00:17:12.304 }' 00:17:12.304 19:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.304 19:15:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.563 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.564 [2024-11-27 19:15:22.119891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.564 "name": "Existed_Raid", 00:17:12.564 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:12.564 "strip_size_kb": 64, 00:17:12.564 "state": "configuring", 00:17:12.564 "raid_level": "raid5f", 00:17:12.564 "superblock": true, 00:17:12.564 "num_base_bdevs": 4, 00:17:12.564 "num_base_bdevs_discovered": 3, 00:17:12.564 "num_base_bdevs_operational": 4, 00:17:12.564 "base_bdevs_list": [ 00:17:12.564 { 00:17:12.564 "name": null, 00:17:12.564 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:12.564 "is_configured": false, 00:17:12.564 "data_offset": 0, 00:17:12.564 "data_size": 63488 00:17:12.564 }, 00:17:12.564 { 00:17:12.564 "name": "BaseBdev2", 00:17:12.564 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:12.564 "is_configured": true, 00:17:12.564 "data_offset": 2048, 00:17:12.564 "data_size": 63488 00:17:12.564 }, 00:17:12.564 { 00:17:12.564 "name": "BaseBdev3", 00:17:12.564 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:12.564 "is_configured": true, 00:17:12.564 "data_offset": 2048, 00:17:12.564 "data_size": 63488 00:17:12.564 }, 00:17:12.564 { 00:17:12.564 "name": "BaseBdev4", 00:17:12.564 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:12.564 "is_configured": true, 00:17:12.564 "data_offset": 2048, 00:17:12.564 "data_size": 63488 00:17:12.564 } 00:17:12.564 ] 00:17:12.564 }' 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:17:12.564 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7ea5392-af8b-4e44-bfd2-61444d0a5ade 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.133 [2024-11-27 19:15:22.733200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:13.133 [2024-11-27 19:15:22.733582] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:13.133 [2024-11-27 19:15:22.733632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:13.133 [2024-11-27 19:15:22.733963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:13.133 NewBaseBdev 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:13.133 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.134 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.134 [2024-11-27 19:15:22.740803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:13.134 [2024-11-27 19:15:22.740868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:13.134 [2024-11-27 19:15:22.741206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.134 19:15:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.134 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:13.134 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.134 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.134 [ 00:17:13.134 { 00:17:13.134 "name": "NewBaseBdev", 00:17:13.134 "aliases": [ 00:17:13.134 "d7ea5392-af8b-4e44-bfd2-61444d0a5ade" 00:17:13.134 ], 00:17:13.134 "product_name": "Malloc disk", 00:17:13.134 "block_size": 512, 00:17:13.134 "num_blocks": 65536, 00:17:13.134 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:13.134 "assigned_rate_limits": { 00:17:13.134 "rw_ios_per_sec": 0, 00:17:13.134 "rw_mbytes_per_sec": 0, 00:17:13.134 "r_mbytes_per_sec": 0, 00:17:13.134 "w_mbytes_per_sec": 0 00:17:13.134 }, 00:17:13.134 "claimed": true, 00:17:13.134 "claim_type": "exclusive_write", 00:17:13.134 "zoned": false, 00:17:13.134 "supported_io_types": { 00:17:13.134 "read": true, 00:17:13.134 "write": true, 00:17:13.394 "unmap": true, 00:17:13.394 "flush": true, 00:17:13.394 "reset": true, 00:17:13.394 "nvme_admin": false, 00:17:13.394 "nvme_io": false, 00:17:13.394 "nvme_io_md": false, 00:17:13.394 "write_zeroes": true, 00:17:13.394 "zcopy": true, 00:17:13.394 "get_zone_info": false, 00:17:13.394 "zone_management": false, 00:17:13.394 "zone_append": false, 00:17:13.394 "compare": false, 00:17:13.394 "compare_and_write": false, 00:17:13.394 "abort": true, 00:17:13.394 "seek_hole": false, 00:17:13.394 "seek_data": false, 00:17:13.394 "copy": true, 00:17:13.394 "nvme_iov_md": false 00:17:13.394 }, 00:17:13.394 "memory_domains": [ 00:17:13.394 { 00:17:13.394 "dma_device_id": "system", 00:17:13.394 "dma_device_type": 1 00:17:13.394 }, 00:17:13.394 { 00:17:13.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.394 "dma_device_type": 2 00:17:13.394 } 
00:17:13.394 ], 00:17:13.394 "driver_specific": {} 00:17:13.394 } 00:17:13.394 ] 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.394 
19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.394 "name": "Existed_Raid", 00:17:13.394 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:13.394 "strip_size_kb": 64, 00:17:13.394 "state": "online", 00:17:13.394 "raid_level": "raid5f", 00:17:13.394 "superblock": true, 00:17:13.394 "num_base_bdevs": 4, 00:17:13.394 "num_base_bdevs_discovered": 4, 00:17:13.394 "num_base_bdevs_operational": 4, 00:17:13.394 "base_bdevs_list": [ 00:17:13.394 { 00:17:13.394 "name": "NewBaseBdev", 00:17:13.394 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:13.394 "is_configured": true, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 }, 00:17:13.394 { 00:17:13.394 "name": "BaseBdev2", 00:17:13.394 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:13.394 "is_configured": true, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 }, 00:17:13.394 { 00:17:13.394 "name": "BaseBdev3", 00:17:13.394 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:13.394 "is_configured": true, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 }, 00:17:13.394 { 00:17:13.394 "name": "BaseBdev4", 00:17:13.394 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:13.394 "is_configured": true, 00:17:13.394 "data_offset": 2048, 00:17:13.394 "data_size": 63488 00:17:13.394 } 00:17:13.394 ] 00:17:13.394 }' 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.394 19:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.683 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:13.683 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.684 [2024-11-27 19:15:23.221715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:13.684 "name": "Existed_Raid", 00:17:13.684 "aliases": [ 00:17:13.684 "280e22bb-1054-431e-9405-c1832a773800" 00:17:13.684 ], 00:17:13.684 "product_name": "Raid Volume", 00:17:13.684 "block_size": 512, 00:17:13.684 "num_blocks": 190464, 00:17:13.684 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:13.684 "assigned_rate_limits": { 00:17:13.684 "rw_ios_per_sec": 0, 00:17:13.684 "rw_mbytes_per_sec": 0, 00:17:13.684 "r_mbytes_per_sec": 0, 00:17:13.684 "w_mbytes_per_sec": 0 00:17:13.684 }, 00:17:13.684 "claimed": false, 00:17:13.684 "zoned": false, 00:17:13.684 "supported_io_types": { 00:17:13.684 "read": true, 00:17:13.684 "write": true, 00:17:13.684 "unmap": false, 00:17:13.684 "flush": false, 
00:17:13.684 "reset": true, 00:17:13.684 "nvme_admin": false, 00:17:13.684 "nvme_io": false, 00:17:13.684 "nvme_io_md": false, 00:17:13.684 "write_zeroes": true, 00:17:13.684 "zcopy": false, 00:17:13.684 "get_zone_info": false, 00:17:13.684 "zone_management": false, 00:17:13.684 "zone_append": false, 00:17:13.684 "compare": false, 00:17:13.684 "compare_and_write": false, 00:17:13.684 "abort": false, 00:17:13.684 "seek_hole": false, 00:17:13.684 "seek_data": false, 00:17:13.684 "copy": false, 00:17:13.684 "nvme_iov_md": false 00:17:13.684 }, 00:17:13.684 "driver_specific": { 00:17:13.684 "raid": { 00:17:13.684 "uuid": "280e22bb-1054-431e-9405-c1832a773800", 00:17:13.684 "strip_size_kb": 64, 00:17:13.684 "state": "online", 00:17:13.684 "raid_level": "raid5f", 00:17:13.684 "superblock": true, 00:17:13.684 "num_base_bdevs": 4, 00:17:13.684 "num_base_bdevs_discovered": 4, 00:17:13.684 "num_base_bdevs_operational": 4, 00:17:13.684 "base_bdevs_list": [ 00:17:13.684 { 00:17:13.684 "name": "NewBaseBdev", 00:17:13.684 "uuid": "d7ea5392-af8b-4e44-bfd2-61444d0a5ade", 00:17:13.684 "is_configured": true, 00:17:13.684 "data_offset": 2048, 00:17:13.684 "data_size": 63488 00:17:13.684 }, 00:17:13.684 { 00:17:13.684 "name": "BaseBdev2", 00:17:13.684 "uuid": "8e1619d8-873c-4440-9283-a315f6bb62d5", 00:17:13.684 "is_configured": true, 00:17:13.684 "data_offset": 2048, 00:17:13.684 "data_size": 63488 00:17:13.684 }, 00:17:13.684 { 00:17:13.684 "name": "BaseBdev3", 00:17:13.684 "uuid": "09bcdef1-a644-40f2-900b-57765aca5640", 00:17:13.684 "is_configured": true, 00:17:13.684 "data_offset": 2048, 00:17:13.684 "data_size": 63488 00:17:13.684 }, 00:17:13.684 { 00:17:13.684 "name": "BaseBdev4", 00:17:13.684 "uuid": "454882d8-c9ff-4c5f-bf95-81a8185b42f2", 00:17:13.684 "is_configured": true, 00:17:13.684 "data_offset": 2048, 00:17:13.684 "data_size": 63488 00:17:13.684 } 00:17:13.684 ] 00:17:13.684 } 00:17:13.684 } 00:17:13.684 }' 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:13.684 BaseBdev2 00:17:13.684 BaseBdev3 00:17:13.684 BaseBdev4' 00:17:13.684 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:13.944 
19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:13.944 19:15:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.944 [2024-11-27 19:15:23.540872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.944 [2024-11-27 19:15:23.540910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.944 [2024-11-27 19:15:23.541009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.944 [2024-11-27 19:15:23.541335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.944 [2024-11-27 19:15:23.541347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83514 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83514 ']' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83514 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.944 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83514 00:17:14.204 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.204 killing process with pid 83514 00:17:14.204 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.204 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83514' 00:17:14.204 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83514 00:17:14.204 [2024-11-27 19:15:23.591447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.204 19:15:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83514 00:17:14.464 [2024-11-27 19:15:24.011203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.845 19:15:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:15.845 00:17:15.845 real 0m11.566s 00:17:15.845 user 0m17.929s 00:17:15.845 sys 0m2.384s 00:17:15.845 19:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.845 19:15:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.845 ************************************ 00:17:15.845 END TEST raid5f_state_function_test_sb 00:17:15.845 ************************************ 00:17:15.845 19:15:25 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:15.845 19:15:25 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:15.845 19:15:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.845 19:15:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.845 ************************************ 00:17:15.845 START TEST raid5f_superblock_test 00:17:15.845 ************************************ 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84180 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84180 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84180 ']' 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.846 19:15:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.846 [2024-11-27 19:15:25.394406] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:15.846 [2024-11-27 19:15:25.394533] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84180 ] 00:17:16.106 [2024-11-27 19:15:25.564566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.106 [2024-11-27 19:15:25.708911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.365 [2024-11-27 19:15:25.947674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.365 [2024-11-27 19:15:25.947832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.626 malloc1 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.626 [2024-11-27 19:15:26.243642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:16.626 [2024-11-27 19:15:26.243819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.626 [2024-11-27 19:15:26.243870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:16.626 [2024-11-27 19:15:26.243902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.626 [2024-11-27 19:15:26.246258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.626 [2024-11-27 19:15:26.246336] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:16.626 pt1 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.626 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 malloc2 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 [2024-11-27 19:15:26.307894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.887 [2024-11-27 19:15:26.308032] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.887 [2024-11-27 19:15:26.308065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:16.887 [2024-11-27 19:15:26.308074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.887 [2024-11-27 19:15:26.310438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.887 [2024-11-27 19:15:26.310476] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.887 pt2 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 malloc3 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 [2024-11-27 19:15:26.380168] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.887 [2024-11-27 19:15:26.380293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.887 [2024-11-27 19:15:26.380332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:16.887 [2024-11-27 19:15:26.380382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.887 [2024-11-27 19:15:26.382717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.887 [2024-11-27 19:15:26.382784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.887 pt3 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 19:15:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 malloc4 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 [2024-11-27 19:15:26.446313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:16.887 [2024-11-27 19:15:26.446459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.887 [2024-11-27 19:15:26.446501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:16.887 [2024-11-27 19:15:26.446534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.887 [2024-11-27 19:15:26.448950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.887 [2024-11-27 19:15:26.449028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:16.887 pt4 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 19:15:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.887 [2024-11-27 19:15:26.458335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.887 [2024-11-27 19:15:26.460497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.887 [2024-11-27 19:15:26.460635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.887 [2024-11-27 19:15:26.460722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:16.887 [2024-11-27 19:15:26.460968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:16.887 [2024-11-27 19:15:26.461020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:16.888 [2024-11-27 19:15:26.461303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:16.888 [2024-11-27 19:15:26.468385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:16.888 [2024-11-27 19:15:26.468445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:16.888 [2024-11-27 19:15:26.468667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.888 
19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.888 "name": "raid_bdev1", 00:17:16.888 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:16.888 "strip_size_kb": 64, 00:17:16.888 "state": "online", 00:17:16.888 "raid_level": "raid5f", 00:17:16.888 "superblock": true, 00:17:16.888 "num_base_bdevs": 4, 00:17:16.888 "num_base_bdevs_discovered": 4, 00:17:16.888 "num_base_bdevs_operational": 4, 00:17:16.888 "base_bdevs_list": [ 00:17:16.888 { 00:17:16.888 "name": "pt1", 00:17:16.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.888 "is_configured": true, 00:17:16.888 "data_offset": 2048, 00:17:16.888 "data_size": 63488 00:17:16.888 }, 00:17:16.888 { 00:17:16.888 "name": "pt2", 00:17:16.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.888 "is_configured": true, 00:17:16.888 "data_offset": 2048, 00:17:16.888 
"data_size": 63488 00:17:16.888 }, 00:17:16.888 { 00:17:16.888 "name": "pt3", 00:17:16.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.888 "is_configured": true, 00:17:16.888 "data_offset": 2048, 00:17:16.888 "data_size": 63488 00:17:16.888 }, 00:17:16.888 { 00:17:16.888 "name": "pt4", 00:17:16.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.888 "is_configured": true, 00:17:16.888 "data_offset": 2048, 00:17:16.888 "data_size": 63488 00:17:16.888 } 00:17:16.888 ] 00:17:16.888 }' 00:17:16.888 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.147 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.407 [2024-11-27 19:15:26.857788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:17.407 "name": "raid_bdev1", 00:17:17.407 "aliases": [ 00:17:17.407 "364454d3-48c4-4179-84ca-46b8ca9fcf27" 00:17:17.407 ], 00:17:17.407 "product_name": "Raid Volume", 00:17:17.407 "block_size": 512, 00:17:17.407 "num_blocks": 190464, 00:17:17.407 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:17.407 "assigned_rate_limits": { 00:17:17.407 "rw_ios_per_sec": 0, 00:17:17.407 "rw_mbytes_per_sec": 0, 00:17:17.407 "r_mbytes_per_sec": 0, 00:17:17.407 "w_mbytes_per_sec": 0 00:17:17.407 }, 00:17:17.407 "claimed": false, 00:17:17.407 "zoned": false, 00:17:17.407 "supported_io_types": { 00:17:17.407 "read": true, 00:17:17.407 "write": true, 00:17:17.407 "unmap": false, 00:17:17.407 "flush": false, 00:17:17.407 "reset": true, 00:17:17.407 "nvme_admin": false, 00:17:17.407 "nvme_io": false, 00:17:17.407 "nvme_io_md": false, 00:17:17.407 "write_zeroes": true, 00:17:17.407 "zcopy": false, 00:17:17.407 "get_zone_info": false, 00:17:17.407 "zone_management": false, 00:17:17.407 "zone_append": false, 00:17:17.407 "compare": false, 00:17:17.407 "compare_and_write": false, 00:17:17.407 "abort": false, 00:17:17.407 "seek_hole": false, 00:17:17.407 "seek_data": false, 00:17:17.407 "copy": false, 00:17:17.407 "nvme_iov_md": false 00:17:17.407 }, 00:17:17.407 "driver_specific": { 00:17:17.407 "raid": { 00:17:17.407 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:17.407 "strip_size_kb": 64, 00:17:17.407 "state": "online", 00:17:17.407 "raid_level": "raid5f", 00:17:17.407 "superblock": true, 00:17:17.407 "num_base_bdevs": 4, 00:17:17.407 "num_base_bdevs_discovered": 4, 00:17:17.407 "num_base_bdevs_operational": 4, 00:17:17.407 "base_bdevs_list": [ 00:17:17.407 { 00:17:17.407 "name": "pt1", 00:17:17.407 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.407 "is_configured": true, 00:17:17.407 "data_offset": 2048, 
00:17:17.407 "data_size": 63488 00:17:17.407 }, 00:17:17.407 { 00:17:17.407 "name": "pt2", 00:17:17.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.407 "is_configured": true, 00:17:17.407 "data_offset": 2048, 00:17:17.407 "data_size": 63488 00:17:17.407 }, 00:17:17.407 { 00:17:17.407 "name": "pt3", 00:17:17.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.407 "is_configured": true, 00:17:17.407 "data_offset": 2048, 00:17:17.407 "data_size": 63488 00:17:17.407 }, 00:17:17.407 { 00:17:17.407 "name": "pt4", 00:17:17.407 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.407 "is_configured": true, 00:17:17.407 "data_offset": 2048, 00:17:17.407 "data_size": 63488 00:17:17.407 } 00:17:17.407 ] 00:17:17.407 } 00:17:17.407 } 00:17:17.407 }' 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:17.407 pt2 00:17:17.407 pt3 00:17:17.407 pt4' 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.407 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:17.408 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.408 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.408 19:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:17.408 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.408 19:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.408 19:15:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.408 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:17.670 [2024-11-27 19:15:27.169149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=364454d3-48c4-4179-84ca-46b8ca9fcf27 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
364454d3-48c4-4179-84ca-46b8ca9fcf27 ']' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 [2024-11-27 19:15:27.216890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.670 [2024-11-27 19:15:27.216919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.670 [2024-11-27 19:15:27.217007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.670 [2024-11-27 19:15:27.217100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.670 [2024-11-27 19:15:27.217114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.670 
19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.670 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 19:15:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 [2024-11-27 19:15:27.380646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:17.930 [2024-11-27 19:15:27.382910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:17.930 [2024-11-27 19:15:27.382959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:17.930 [2024-11-27 19:15:27.382993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:17.930 [2024-11-27 19:15:27.383046] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:17.930 [2024-11-27 19:15:27.383097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:17.930 [2024-11-27 19:15:27.383116] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:17.930 [2024-11-27 19:15:27.383135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:17.930 [2024-11-27 19:15:27.383148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.930 [2024-11-27 19:15:27.383160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:17.930 request: 00:17:17.930 { 00:17:17.930 "name": "raid_bdev1", 00:17:17.930 "raid_level": "raid5f", 00:17:17.930 "base_bdevs": [ 00:17:17.930 "malloc1", 00:17:17.930 "malloc2", 00:17:17.930 "malloc3", 00:17:17.930 "malloc4" 00:17:17.930 ], 00:17:17.930 "strip_size_kb": 64, 00:17:17.930 "superblock": false, 00:17:17.930 "method": "bdev_raid_create", 00:17:17.930 "req_id": 1 00:17:17.930 } 00:17:17.930 Got JSON-RPC error response 
00:17:17.930 response: 00:17:17.930 { 00:17:17.930 "code": -17, 00:17:17.930 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:17.930 } 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 [2024-11-27 19:15:27.436511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.930 [2024-11-27 19:15:27.436613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:17.930 [2024-11-27 19:15:27.436646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:17.930 [2024-11-27 19:15:27.436676] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.930 [2024-11-27 19:15:27.439203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.930 [2024-11-27 19:15:27.439291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.930 [2024-11-27 19:15:27.439398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:17.930 [2024-11-27 19:15:27.439511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.930 pt1 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.930 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.930 "name": "raid_bdev1", 00:17:17.930 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:17.930 "strip_size_kb": 64, 00:17:17.930 "state": "configuring", 00:17:17.931 "raid_level": "raid5f", 00:17:17.931 "superblock": true, 00:17:17.931 "num_base_bdevs": 4, 00:17:17.931 "num_base_bdevs_discovered": 1, 00:17:17.931 "num_base_bdevs_operational": 4, 00:17:17.931 "base_bdevs_list": [ 00:17:17.931 { 00:17:17.931 "name": "pt1", 00:17:17.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.931 "is_configured": true, 00:17:17.931 "data_offset": 2048, 00:17:17.931 "data_size": 63488 00:17:17.931 }, 00:17:17.931 { 00:17:17.931 "name": null, 00:17:17.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.931 "is_configured": false, 00:17:17.931 "data_offset": 2048, 00:17:17.931 "data_size": 63488 00:17:17.931 }, 00:17:17.931 { 00:17:17.931 "name": null, 00:17:17.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.931 "is_configured": false, 00:17:17.931 "data_offset": 2048, 00:17:17.931 "data_size": 63488 00:17:17.931 }, 00:17:17.931 { 00:17:17.931 "name": null, 00:17:17.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.931 "is_configured": false, 00:17:17.931 "data_offset": 2048, 00:17:17.931 "data_size": 63488 00:17:17.931 } 00:17:17.931 ] 00:17:17.931 }' 
00:17:17.931 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.931 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.500 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:18.500 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.500 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.500 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.500 [2024-11-27 19:15:27.867908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.500 [2024-11-27 19:15:27.868077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.500 [2024-11-27 19:15:27.868105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:18.500 [2024-11-27 19:15:27.868117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.500 [2024-11-27 19:15:27.868676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.501 [2024-11-27 19:15:27.868698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.501 [2024-11-27 19:15:27.868819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:18.501 [2024-11-27 19:15:27.868850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.501 pt2 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.501 [2024-11-27 19:15:27.875888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.501 "name": "raid_bdev1", 00:17:18.501 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:18.501 "strip_size_kb": 64, 00:17:18.501 "state": "configuring", 00:17:18.501 "raid_level": "raid5f", 00:17:18.501 "superblock": true, 00:17:18.501 "num_base_bdevs": 4, 00:17:18.501 "num_base_bdevs_discovered": 1, 00:17:18.501 "num_base_bdevs_operational": 4, 00:17:18.501 "base_bdevs_list": [ 00:17:18.501 { 00:17:18.501 "name": "pt1", 00:17:18.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.501 "is_configured": true, 00:17:18.501 "data_offset": 2048, 00:17:18.501 "data_size": 63488 00:17:18.501 }, 00:17:18.501 { 00:17:18.501 "name": null, 00:17:18.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.501 "is_configured": false, 00:17:18.501 "data_offset": 0, 00:17:18.501 "data_size": 63488 00:17:18.501 }, 00:17:18.501 { 00:17:18.501 "name": null, 00:17:18.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.501 "is_configured": false, 00:17:18.501 "data_offset": 2048, 00:17:18.501 "data_size": 63488 00:17:18.501 }, 00:17:18.501 { 00:17:18.501 "name": null, 00:17:18.501 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.501 "is_configured": false, 00:17:18.501 "data_offset": 2048, 00:17:18.501 "data_size": 63488 00:17:18.501 } 00:17:18.501 ] 00:17:18.501 }' 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.501 19:15:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.760 [2024-11-27 19:15:28.323889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.760 [2024-11-27 19:15:28.324053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.760 [2024-11-27 19:15:28.324093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:18.760 [2024-11-27 19:15:28.324122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.760 [2024-11-27 19:15:28.324688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.760 [2024-11-27 19:15:28.324760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.760 [2024-11-27 19:15:28.324894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:18.760 [2024-11-27 19:15:28.324948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.760 pt2 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:18.760 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 [2024-11-27 19:15:28.335843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:18.761 [2024-11-27 19:15:28.335893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.761 [2024-11-27 19:15:28.335920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:18.761 [2024-11-27 19:15:28.335930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.761 [2024-11-27 19:15:28.336313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.761 [2024-11-27 19:15:28.336327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.761 [2024-11-27 19:15:28.336391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:18.761 [2024-11-27 19:15:28.336416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.761 pt3 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 [2024-11-27 19:15:28.347795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:18.761 [2024-11-27 19:15:28.347839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.761 [2024-11-27 19:15:28.347855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:18.761 [2024-11-27 19:15:28.347862] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.761 [2024-11-27 19:15:28.348240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.761 [2024-11-27 19:15:28.348255] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:18.761 [2024-11-27 19:15:28.348312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:18.761 [2024-11-27 19:15:28.348332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:18.761 [2024-11-27 19:15:28.348469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:18.761 [2024-11-27 19:15:28.348478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.761 [2024-11-27 19:15:28.348737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:18.761 [2024-11-27 19:15:28.355541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:18.761 pt4 00:17:18.761 [2024-11-27 19:15:28.355615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:18.761 [2024-11-27 19:15:28.355843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.761 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.020 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.020 "name": "raid_bdev1", 00:17:19.020 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:19.020 "strip_size_kb": 64, 00:17:19.020 "state": "online", 00:17:19.020 "raid_level": "raid5f", 00:17:19.020 "superblock": true, 00:17:19.020 "num_base_bdevs": 4, 00:17:19.020 "num_base_bdevs_discovered": 4, 00:17:19.020 "num_base_bdevs_operational": 4, 00:17:19.020 "base_bdevs_list": [ 00:17:19.020 { 00:17:19.020 "name": "pt1", 00:17:19.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.020 "is_configured": true, 00:17:19.020 
"data_offset": 2048, 00:17:19.020 "data_size": 63488 00:17:19.020 }, 00:17:19.020 { 00:17:19.020 "name": "pt2", 00:17:19.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.020 "is_configured": true, 00:17:19.021 "data_offset": 2048, 00:17:19.021 "data_size": 63488 00:17:19.021 }, 00:17:19.021 { 00:17:19.021 "name": "pt3", 00:17:19.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.021 "is_configured": true, 00:17:19.021 "data_offset": 2048, 00:17:19.021 "data_size": 63488 00:17:19.021 }, 00:17:19.021 { 00:17:19.021 "name": "pt4", 00:17:19.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.021 "is_configured": true, 00:17:19.021 "data_offset": 2048, 00:17:19.021 "data_size": 63488 00:17:19.021 } 00:17:19.021 ] 00:17:19.021 }' 00:17:19.021 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.021 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.280 19:15:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.280 [2024-11-27 19:15:28.772704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.280 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.281 "name": "raid_bdev1", 00:17:19.281 "aliases": [ 00:17:19.281 "364454d3-48c4-4179-84ca-46b8ca9fcf27" 00:17:19.281 ], 00:17:19.281 "product_name": "Raid Volume", 00:17:19.281 "block_size": 512, 00:17:19.281 "num_blocks": 190464, 00:17:19.281 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:19.281 "assigned_rate_limits": { 00:17:19.281 "rw_ios_per_sec": 0, 00:17:19.281 "rw_mbytes_per_sec": 0, 00:17:19.281 "r_mbytes_per_sec": 0, 00:17:19.281 "w_mbytes_per_sec": 0 00:17:19.281 }, 00:17:19.281 "claimed": false, 00:17:19.281 "zoned": false, 00:17:19.281 "supported_io_types": { 00:17:19.281 "read": true, 00:17:19.281 "write": true, 00:17:19.281 "unmap": false, 00:17:19.281 "flush": false, 00:17:19.281 "reset": true, 00:17:19.281 "nvme_admin": false, 00:17:19.281 "nvme_io": false, 00:17:19.281 "nvme_io_md": false, 00:17:19.281 "write_zeroes": true, 00:17:19.281 "zcopy": false, 00:17:19.281 "get_zone_info": false, 00:17:19.281 "zone_management": false, 00:17:19.281 "zone_append": false, 00:17:19.281 "compare": false, 00:17:19.281 "compare_and_write": false, 00:17:19.281 "abort": false, 00:17:19.281 "seek_hole": false, 00:17:19.281 "seek_data": false, 00:17:19.281 "copy": false, 00:17:19.281 "nvme_iov_md": false 00:17:19.281 }, 00:17:19.281 "driver_specific": { 00:17:19.281 "raid": { 00:17:19.281 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:19.281 "strip_size_kb": 64, 00:17:19.281 "state": "online", 00:17:19.281 "raid_level": "raid5f", 00:17:19.281 "superblock": true, 00:17:19.281 "num_base_bdevs": 4, 00:17:19.281 "num_base_bdevs_discovered": 4, 
00:17:19.281 "num_base_bdevs_operational": 4, 00:17:19.281 "base_bdevs_list": [ 00:17:19.281 { 00:17:19.281 "name": "pt1", 00:17:19.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.281 "is_configured": true, 00:17:19.281 "data_offset": 2048, 00:17:19.281 "data_size": 63488 00:17:19.281 }, 00:17:19.281 { 00:17:19.281 "name": "pt2", 00:17:19.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.281 "is_configured": true, 00:17:19.281 "data_offset": 2048, 00:17:19.281 "data_size": 63488 00:17:19.281 }, 00:17:19.281 { 00:17:19.281 "name": "pt3", 00:17:19.281 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.281 "is_configured": true, 00:17:19.281 "data_offset": 2048, 00:17:19.281 "data_size": 63488 00:17:19.281 }, 00:17:19.281 { 00:17:19.281 "name": "pt4", 00:17:19.281 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.281 "is_configured": true, 00:17:19.281 "data_offset": 2048, 00:17:19.281 "data_size": 63488 00:17:19.281 } 00:17:19.281 ] 00:17:19.281 } 00:17:19.281 } 00:17:19.281 }' 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.281 pt2 00:17:19.281 pt3 00:17:19.281 pt4' 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.281 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:19.541 19:15:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.541 19:15:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:19.541 [2024-11-27 19:15:29.064069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.541 
19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 364454d3-48c4-4179-84ca-46b8ca9fcf27 '!=' 364454d3-48c4-4179-84ca-46b8ca9fcf27 ']' 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 [2024-11-27 19:15:29.111882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:19.541 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.542 "name": "raid_bdev1", 00:17:19.542 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:19.542 "strip_size_kb": 64, 00:17:19.542 "state": "online", 00:17:19.542 "raid_level": "raid5f", 00:17:19.542 "superblock": true, 00:17:19.542 "num_base_bdevs": 4, 00:17:19.542 "num_base_bdevs_discovered": 3, 00:17:19.542 "num_base_bdevs_operational": 3, 00:17:19.542 "base_bdevs_list": [ 00:17:19.542 { 00:17:19.542 "name": null, 00:17:19.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.542 "is_configured": false, 00:17:19.542 "data_offset": 0, 00:17:19.542 "data_size": 63488 00:17:19.542 }, 00:17:19.542 { 00:17:19.542 "name": "pt2", 00:17:19.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.542 "is_configured": true, 00:17:19.542 "data_offset": 2048, 00:17:19.542 "data_size": 63488 00:17:19.542 }, 00:17:19.542 { 00:17:19.542 "name": "pt3", 00:17:19.542 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.542 "is_configured": true, 00:17:19.542 "data_offset": 2048, 00:17:19.542 "data_size": 63488 00:17:19.542 }, 00:17:19.542 { 00:17:19.542 "name": "pt4", 00:17:19.542 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:19.542 "is_configured": true, 00:17:19.542 
"data_offset": 2048, 00:17:19.542 "data_size": 63488 00:17:19.542 } 00:17:19.542 ] 00:17:19.542 }' 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.542 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 [2024-11-27 19:15:29.543867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.108 [2024-11-27 19:15:29.543986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.108 [2024-11-27 19:15:29.544108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.108 [2024-11-27 19:15:29.544218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.108 [2024-11-27 19:15:29.544268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.108 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.109 [2024-11-27 19:15:29.643823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.109 [2024-11-27 19:15:29.643883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.109 [2024-11-27 19:15:29.643904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:20.109 [2024-11-27 19:15:29.643913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.109 [2024-11-27 19:15:29.646482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.109 [2024-11-27 19:15:29.646519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.109 [2024-11-27 19:15:29.646607] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:20.109 [2024-11-27 19:15:29.646656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.109 pt2 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.109 "name": "raid_bdev1", 00:17:20.109 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:20.109 "strip_size_kb": 64, 00:17:20.109 "state": "configuring", 00:17:20.109 "raid_level": "raid5f", 00:17:20.109 "superblock": true, 00:17:20.109 
"num_base_bdevs": 4, 00:17:20.109 "num_base_bdevs_discovered": 1, 00:17:20.109 "num_base_bdevs_operational": 3, 00:17:20.109 "base_bdevs_list": [ 00:17:20.109 { 00:17:20.109 "name": null, 00:17:20.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.109 "is_configured": false, 00:17:20.109 "data_offset": 2048, 00:17:20.109 "data_size": 63488 00:17:20.109 }, 00:17:20.109 { 00:17:20.109 "name": "pt2", 00:17:20.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.109 "is_configured": true, 00:17:20.109 "data_offset": 2048, 00:17:20.109 "data_size": 63488 00:17:20.109 }, 00:17:20.109 { 00:17:20.109 "name": null, 00:17:20.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.109 "is_configured": false, 00:17:20.109 "data_offset": 2048, 00:17:20.109 "data_size": 63488 00:17:20.109 }, 00:17:20.109 { 00:17:20.109 "name": null, 00:17:20.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.109 "is_configured": false, 00:17:20.109 "data_offset": 2048, 00:17:20.109 "data_size": 63488 00:17:20.109 } 00:17:20.109 ] 00:17:20.109 }' 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.109 19:15:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.678 [2024-11-27 19:15:30.119897] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:20.678 [2024-11-27 
19:15:30.120067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.678 [2024-11-27 19:15:30.120112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:20.678 [2024-11-27 19:15:30.120160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.678 [2024-11-27 19:15:30.120733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.678 [2024-11-27 19:15:30.120795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:20.678 [2024-11-27 19:15:30.120925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:20.678 [2024-11-27 19:15:30.120979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:20.678 pt3 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.678 "name": "raid_bdev1", 00:17:20.678 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:20.678 "strip_size_kb": 64, 00:17:20.678 "state": "configuring", 00:17:20.678 "raid_level": "raid5f", 00:17:20.678 "superblock": true, 00:17:20.678 "num_base_bdevs": 4, 00:17:20.678 "num_base_bdevs_discovered": 2, 00:17:20.678 "num_base_bdevs_operational": 3, 00:17:20.678 "base_bdevs_list": [ 00:17:20.678 { 00:17:20.678 "name": null, 00:17:20.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.678 "is_configured": false, 00:17:20.678 "data_offset": 2048, 00:17:20.678 "data_size": 63488 00:17:20.678 }, 00:17:20.678 { 00:17:20.678 "name": "pt2", 00:17:20.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.678 "is_configured": true, 00:17:20.678 "data_offset": 2048, 00:17:20.678 "data_size": 63488 00:17:20.678 }, 00:17:20.678 { 00:17:20.678 "name": "pt3", 00:17:20.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.678 "is_configured": true, 00:17:20.678 "data_offset": 2048, 00:17:20.678 "data_size": 63488 00:17:20.678 }, 00:17:20.678 { 00:17:20.678 "name": null, 00:17:20.678 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:20.678 "is_configured": false, 00:17:20.678 "data_offset": 2048, 
00:17:20.678 "data_size": 63488 00:17:20.678 } 00:17:20.678 ] 00:17:20.678 }' 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.678 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.937 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:20.937 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:20.937 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:20.937 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:20.937 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.937 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.937 [2024-11-27 19:15:30.559905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:20.938 [2024-11-27 19:15:30.559988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.938 [2024-11-27 19:15:30.560015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:20.938 [2024-11-27 19:15:30.560023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.938 [2024-11-27 19:15:30.560549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.938 [2024-11-27 19:15:30.560568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:20.938 [2024-11-27 19:15:30.560666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:20.938 [2024-11-27 19:15:30.560713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:20.938 [2024-11-27 19:15:30.560860] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:20.938 [2024-11-27 19:15:30.560869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:20.938 [2024-11-27 19:15:30.561151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:20.938 [2024-11-27 19:15:30.568023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:20.938 [2024-11-27 19:15:30.568050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:20.938 [2024-11-27 19:15:30.568381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.938 pt4 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.938 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.197 
19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.197 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.197 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.197 "name": "raid_bdev1", 00:17:21.197 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:21.197 "strip_size_kb": 64, 00:17:21.197 "state": "online", 00:17:21.197 "raid_level": "raid5f", 00:17:21.197 "superblock": true, 00:17:21.197 "num_base_bdevs": 4, 00:17:21.197 "num_base_bdevs_discovered": 3, 00:17:21.197 "num_base_bdevs_operational": 3, 00:17:21.197 "base_bdevs_list": [ 00:17:21.197 { 00:17:21.197 "name": null, 00:17:21.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.197 "is_configured": false, 00:17:21.197 "data_offset": 2048, 00:17:21.197 "data_size": 63488 00:17:21.197 }, 00:17:21.197 { 00:17:21.197 "name": "pt2", 00:17:21.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.197 "is_configured": true, 00:17:21.197 "data_offset": 2048, 00:17:21.197 "data_size": 63488 00:17:21.197 }, 00:17:21.197 { 00:17:21.197 "name": "pt3", 00:17:21.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.197 "is_configured": true, 00:17:21.197 "data_offset": 2048, 00:17:21.197 "data_size": 63488 00:17:21.197 }, 00:17:21.197 { 00:17:21.197 "name": "pt4", 00:17:21.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.197 "is_configured": true, 00:17:21.197 "data_offset": 2048, 00:17:21.197 "data_size": 63488 00:17:21.197 } 00:17:21.197 ] 00:17:21.197 }' 00:17:21.197 19:15:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.197 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.457 19:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.457 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.457 19:15:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.457 [2024-11-27 19:15:31.001539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.457 [2024-11-27 19:15:31.001657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.457 [2024-11-27 19:15:31.001786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.457 [2024-11-27 19:15:31.001904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.457 [2024-11-27 19:15:31.001967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.457 [2024-11-27 19:15:31.061397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:21.457 [2024-11-27 19:15:31.061475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.457 [2024-11-27 19:15:31.061504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:21.457 [2024-11-27 19:15:31.061518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.457 [2024-11-27 19:15:31.064203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.457 [2024-11-27 19:15:31.064245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:21.457 [2024-11-27 19:15:31.064341] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:21.457 [2024-11-27 19:15:31.064394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:21.457 
[2024-11-27 19:15:31.064545] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:21.457 [2024-11-27 19:15:31.064559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.457 [2024-11-27 19:15:31.064575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:21.457 [2024-11-27 19:15:31.064642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.457 [2024-11-27 19:15:31.064785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:21.457 pt1 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.457 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.717 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.717 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.717 "name": "raid_bdev1", 00:17:21.717 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:21.717 "strip_size_kb": 64, 00:17:21.717 "state": "configuring", 00:17:21.717 "raid_level": "raid5f", 00:17:21.717 "superblock": true, 00:17:21.717 "num_base_bdevs": 4, 00:17:21.717 "num_base_bdevs_discovered": 2, 00:17:21.717 "num_base_bdevs_operational": 3, 00:17:21.717 "base_bdevs_list": [ 00:17:21.717 { 00:17:21.717 "name": null, 00:17:21.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.717 "is_configured": false, 00:17:21.717 "data_offset": 2048, 00:17:21.717 "data_size": 63488 00:17:21.717 }, 00:17:21.717 { 00:17:21.717 "name": "pt2", 00:17:21.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.717 "is_configured": true, 00:17:21.717 "data_offset": 2048, 00:17:21.717 "data_size": 63488 00:17:21.717 }, 00:17:21.717 { 00:17:21.717 "name": "pt3", 00:17:21.717 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.717 "is_configured": true, 00:17:21.717 "data_offset": 2048, 00:17:21.717 "data_size": 63488 00:17:21.717 }, 00:17:21.717 { 00:17:21.717 "name": null, 00:17:21.717 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:21.717 "is_configured": false, 00:17:21.717 "data_offset": 2048, 00:17:21.717 "data_size": 63488 00:17:21.717 } 00:17:21.717 ] 
00:17:21.717 }' 00:17:21.717 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.717 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 [2024-11-27 19:15:31.560610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:21.977 [2024-11-27 19:15:31.560688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.977 [2024-11-27 19:15:31.560747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:21.977 [2024-11-27 19:15:31.560756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.977 [2024-11-27 19:15:31.561326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.977 [2024-11-27 19:15:31.561355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:21.977 [2024-11-27 19:15:31.561453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:21.977 [2024-11-27 19:15:31.561478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:21.977 [2024-11-27 19:15:31.561641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:21.977 [2024-11-27 19:15:31.561650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:21.977 [2024-11-27 19:15:31.561947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:21.977 [2024-11-27 19:15:31.568905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:21.977 [2024-11-27 19:15:31.568931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:21.977 [2024-11-27 19:15:31.569209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.977 pt4 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.977 19:15:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.237 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.237 "name": "raid_bdev1", 00:17:22.237 "uuid": "364454d3-48c4-4179-84ca-46b8ca9fcf27", 00:17:22.237 "strip_size_kb": 64, 00:17:22.237 "state": "online", 00:17:22.237 "raid_level": "raid5f", 00:17:22.237 "superblock": true, 00:17:22.237 "num_base_bdevs": 4, 00:17:22.237 "num_base_bdevs_discovered": 3, 00:17:22.237 "num_base_bdevs_operational": 3, 00:17:22.237 "base_bdevs_list": [ 00:17:22.237 { 00:17:22.237 "name": null, 00:17:22.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.237 "is_configured": false, 00:17:22.237 "data_offset": 2048, 00:17:22.237 "data_size": 63488 00:17:22.237 }, 00:17:22.237 { 00:17:22.237 "name": "pt2", 00:17:22.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.237 "is_configured": true, 00:17:22.237 "data_offset": 2048, 00:17:22.237 "data_size": 63488 00:17:22.237 }, 00:17:22.238 { 00:17:22.238 "name": "pt3", 00:17:22.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.238 "is_configured": true, 00:17:22.238 "data_offset": 2048, 00:17:22.238 "data_size": 63488 
00:17:22.238 }, 00:17:22.238 { 00:17:22.238 "name": "pt4", 00:17:22.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:22.238 "is_configured": true, 00:17:22.238 "data_offset": 2048, 00:17:22.238 "data_size": 63488 00:17:22.238 } 00:17:22.238 ] 00:17:22.238 }' 00:17:22.238 19:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.238 19:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.497 19:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:22.497 19:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:22.497 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.497 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.497 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.498 19:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:22.498 19:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.498 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.498 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.498 19:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:22.498 [2024-11-27 19:15:32.118212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.498 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 364454d3-48c4-4179-84ca-46b8ca9fcf27 '!=' 364454d3-48c4-4179-84ca-46b8ca9fcf27 ']' 00:17:22.757 19:15:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84180 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84180 ']' 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84180 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84180 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84180' 00:17:22.757 killing process with pid 84180 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84180 00:17:22.757 [2024-11-27 19:15:32.204289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.757 [2024-11-27 19:15:32.204405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.757 19:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84180 00:17:22.757 [2024-11-27 19:15:32.204494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.757 [2024-11-27 19:15:32.204514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:23.017 [2024-11-27 19:15:32.626481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.399 ************************************ 00:17:24.399 END TEST raid5f_superblock_test 00:17:24.399 
************************************ 00:17:24.399 19:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:24.399 00:17:24.399 real 0m8.534s 00:17:24.399 user 0m13.136s 00:17:24.399 sys 0m1.731s 00:17:24.399 19:15:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.399 19:15:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.399 19:15:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:24.399 19:15:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:24.399 19:15:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:24.399 19:15:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.399 19:15:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.399 ************************************ 00:17:24.399 START TEST raid5f_rebuild_test 00:17:24.399 ************************************ 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:24.399 19:15:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84670 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84670 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84670 ']' 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.399 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.400 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.400 19:15:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.400 [2024-11-27 19:15:34.000981] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:24.400 [2024-11-27 19:15:34.001187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:24.400 Zero copy mechanism will not be used. 
00:17:24.400 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84670 ] 00:17:24.660 [2024-11-27 19:15:34.173662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.919 [2024-11-27 19:15:34.312612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.919 [2024-11-27 19:15:34.550662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.919 [2024-11-27 19:15:34.550797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 BaseBdev1_malloc 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 [2024-11-27 19:15:34.877092] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.488 [2024-11-27 19:15:34.877220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:25.488 [2024-11-27 19:15:34.877265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:25.488 [2024-11-27 19:15:34.877299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.488 [2024-11-27 19:15:34.879685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.488 [2024-11-27 19:15:34.879815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.488 BaseBdev1 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 BaseBdev2_malloc 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 [2024-11-27 19:15:34.937792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:25.488 [2024-11-27 19:15:34.937855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.488 [2024-11-27 19:15:34.937879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:25.488 [2024-11-27 19:15:34.937890] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.488 [2024-11-27 19:15:34.940297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.488 [2024-11-27 19:15:34.940380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.488 BaseBdev2 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.488 19:15:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 BaseBdev3_malloc 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 [2024-11-27 19:15:35.010453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:25.488 [2024-11-27 19:15:35.010508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.488 [2024-11-27 19:15:35.010528] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:25.488 [2024-11-27 19:15:35.010540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.488 [2024-11-27 19:15:35.012966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.488 [2024-11-27 
19:15:35.013006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:25.488 BaseBdev3 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:25.488 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.489 BaseBdev4_malloc 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.489 [2024-11-27 19:15:35.072241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:25.489 [2024-11-27 19:15:35.072353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.489 [2024-11-27 19:15:35.072382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:25.489 [2024-11-27 19:15:35.072395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.489 [2024-11-27 19:15:35.074800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.489 [2024-11-27 19:15:35.074838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:25.489 BaseBdev4 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.489 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.750 spare_malloc 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.750 spare_delay 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.750 [2024-11-27 19:15:35.144491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.750 [2024-11-27 19:15:35.144544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.750 [2024-11-27 19:15:35.144561] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:25.750 [2024-11-27 19:15:35.144572] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.750 [2024-11-27 19:15:35.146953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.750 [2024-11-27 19:15:35.146988] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.750 spare 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.750 [2024-11-27 19:15:35.156525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.750 [2024-11-27 19:15:35.158647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.750 [2024-11-27 19:15:35.158719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.750 [2024-11-27 19:15:35.158769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.750 [2024-11-27 19:15:35.158871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:25.750 [2024-11-27 19:15:35.158887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:25.750 [2024-11-27 19:15:35.159164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:25.750 [2024-11-27 19:15:35.166563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:25.750 [2024-11-27 19:15:35.166618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:25.750 [2024-11-27 19:15:35.166871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.750 19:15:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.750 "name": "raid_bdev1", 00:17:25.750 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:25.750 "strip_size_kb": 64, 00:17:25.750 "state": "online", 00:17:25.750 "raid_level": "raid5f", 00:17:25.750 "superblock": false, 00:17:25.750 "num_base_bdevs": 4, 00:17:25.750 
"num_base_bdevs_discovered": 4, 00:17:25.750 "num_base_bdevs_operational": 4, 00:17:25.750 "base_bdevs_list": [ 00:17:25.750 { 00:17:25.750 "name": "BaseBdev1", 00:17:25.750 "uuid": "b0c2381b-1513-550e-9458-5f48a14e1396", 00:17:25.750 "is_configured": true, 00:17:25.750 "data_offset": 0, 00:17:25.750 "data_size": 65536 00:17:25.750 }, 00:17:25.750 { 00:17:25.750 "name": "BaseBdev2", 00:17:25.750 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:25.750 "is_configured": true, 00:17:25.750 "data_offset": 0, 00:17:25.750 "data_size": 65536 00:17:25.750 }, 00:17:25.750 { 00:17:25.750 "name": "BaseBdev3", 00:17:25.750 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:25.750 "is_configured": true, 00:17:25.750 "data_offset": 0, 00:17:25.750 "data_size": 65536 00:17:25.750 }, 00:17:25.750 { 00:17:25.750 "name": "BaseBdev4", 00:17:25.750 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:25.750 "is_configured": true, 00:17:25.750 "data_offset": 0, 00:17:25.750 "data_size": 65536 00:17:25.750 } 00:17:25.750 ] 00:17:25.750 }' 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.750 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.010 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:26.010 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.010 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:26.010 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.010 [2024-11-27 19:15:35.631639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.269 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.269 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:17:26.269 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.269 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.269 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:26.270 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:26.530 [2024-11-27 19:15:35.910996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:26.530 /dev/nbd0 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:26.530 1+0 records in 00:17:26.530 1+0 records out 00:17:26.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379004 s, 10.8 MB/s 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:26.530 19:15:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:27.101 512+0 records in 00:17:27.101 512+0 records out 00:17:27.101 100663296 bytes (101 MB, 96 MiB) copied, 0.46241 s, 218 MB/s 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:27.101 [2024-11-27 19:15:36.664345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:27.101 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.102 [2024-11-27 19:15:36.676960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.102 "name": "raid_bdev1", 00:17:27.102 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:27.102 "strip_size_kb": 64, 00:17:27.102 "state": "online", 00:17:27.102 "raid_level": "raid5f", 00:17:27.102 "superblock": false, 00:17:27.102 "num_base_bdevs": 4, 00:17:27.102 "num_base_bdevs_discovered": 3, 00:17:27.102 "num_base_bdevs_operational": 3, 00:17:27.102 "base_bdevs_list": [ 00:17:27.102 { 00:17:27.102 "name": null, 00:17:27.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.102 "is_configured": false, 00:17:27.102 "data_offset": 0, 00:17:27.102 "data_size": 65536 00:17:27.102 }, 00:17:27.102 { 00:17:27.102 "name": "BaseBdev2", 00:17:27.102 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:27.102 "is_configured": true, 00:17:27.102 "data_offset": 0, 00:17:27.102 "data_size": 65536 00:17:27.102 }, 00:17:27.102 { 00:17:27.102 "name": "BaseBdev3", 00:17:27.102 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:27.102 "is_configured": true, 00:17:27.102 "data_offset": 0, 
00:17:27.102 "data_size": 65536 00:17:27.102 }, 00:17:27.102 { 00:17:27.102 "name": "BaseBdev4", 00:17:27.102 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:27.102 "is_configured": true, 00:17:27.102 "data_offset": 0, 00:17:27.102 "data_size": 65536 00:17:27.102 } 00:17:27.102 ] 00:17:27.102 }' 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.102 19:15:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.673 19:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.673 19:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.673 19:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.673 [2024-11-27 19:15:37.096237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.673 [2024-11-27 19:15:37.112786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:27.673 19:15:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.673 19:15:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:27.673 [2024-11-27 19:15:37.122109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.651 19:15:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.651 "name": "raid_bdev1", 00:17:28.651 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:28.651 "strip_size_kb": 64, 00:17:28.651 "state": "online", 00:17:28.651 "raid_level": "raid5f", 00:17:28.651 "superblock": false, 00:17:28.651 "num_base_bdevs": 4, 00:17:28.651 "num_base_bdevs_discovered": 4, 00:17:28.651 "num_base_bdevs_operational": 4, 00:17:28.651 "process": { 00:17:28.651 "type": "rebuild", 00:17:28.651 "target": "spare", 00:17:28.651 "progress": { 00:17:28.651 "blocks": 19200, 00:17:28.651 "percent": 9 00:17:28.651 } 00:17:28.651 }, 00:17:28.651 "base_bdevs_list": [ 00:17:28.651 { 00:17:28.651 "name": "spare", 00:17:28.651 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:28.651 "is_configured": true, 00:17:28.651 "data_offset": 0, 00:17:28.651 "data_size": 65536 00:17:28.651 }, 00:17:28.651 { 00:17:28.651 "name": "BaseBdev2", 00:17:28.651 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:28.651 "is_configured": true, 00:17:28.651 "data_offset": 0, 00:17:28.651 "data_size": 65536 00:17:28.651 }, 00:17:28.651 { 00:17:28.651 "name": "BaseBdev3", 00:17:28.651 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:28.651 "is_configured": true, 00:17:28.651 "data_offset": 0, 00:17:28.651 "data_size": 65536 00:17:28.651 }, 00:17:28.651 { 00:17:28.651 "name": "BaseBdev4", 00:17:28.651 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 
00:17:28.651 "is_configured": true, 00:17:28.651 "data_offset": 0, 00:17:28.651 "data_size": 65536 00:17:28.651 } 00:17:28.651 ] 00:17:28.651 }' 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.651 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.652 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:28.652 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.652 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 [2024-11-27 19:15:38.264788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.911 [2024-11-27 19:15:38.327674] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.911 [2024-11-27 19:15:38.327823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.911 [2024-11-27 19:15:38.327862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.911 [2024-11-27 19:15:38.327890] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.911 "name": "raid_bdev1", 00:17:28.911 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:28.911 "strip_size_kb": 64, 00:17:28.911 "state": "online", 00:17:28.911 "raid_level": "raid5f", 00:17:28.911 "superblock": false, 00:17:28.911 "num_base_bdevs": 4, 00:17:28.911 "num_base_bdevs_discovered": 3, 00:17:28.911 "num_base_bdevs_operational": 3, 00:17:28.911 "base_bdevs_list": [ 00:17:28.911 { 00:17:28.911 "name": null, 00:17:28.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.911 "is_configured": false, 00:17:28.911 "data_offset": 0, 00:17:28.911 "data_size": 65536 
00:17:28.911 }, 00:17:28.911 { 00:17:28.911 "name": "BaseBdev2", 00:17:28.911 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:28.911 "is_configured": true, 00:17:28.911 "data_offset": 0, 00:17:28.911 "data_size": 65536 00:17:28.911 }, 00:17:28.911 { 00:17:28.911 "name": "BaseBdev3", 00:17:28.911 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:28.911 "is_configured": true, 00:17:28.911 "data_offset": 0, 00:17:28.911 "data_size": 65536 00:17:28.911 }, 00:17:28.911 { 00:17:28.911 "name": "BaseBdev4", 00:17:28.911 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:28.911 "is_configured": true, 00:17:28.911 "data_offset": 0, 00:17:28.911 "data_size": 65536 00:17:28.911 } 00:17:28.911 ] 00:17:28.911 }' 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.911 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.480 "name": "raid_bdev1", 00:17:29.480 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:29.480 "strip_size_kb": 64, 00:17:29.480 "state": "online", 00:17:29.480 "raid_level": "raid5f", 00:17:29.480 "superblock": false, 00:17:29.480 "num_base_bdevs": 4, 00:17:29.480 "num_base_bdevs_discovered": 3, 00:17:29.480 "num_base_bdevs_operational": 3, 00:17:29.480 "base_bdevs_list": [ 00:17:29.480 { 00:17:29.480 "name": null, 00:17:29.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.480 "is_configured": false, 00:17:29.480 "data_offset": 0, 00:17:29.480 "data_size": 65536 00:17:29.480 }, 00:17:29.480 { 00:17:29.480 "name": "BaseBdev2", 00:17:29.480 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:29.480 "is_configured": true, 00:17:29.480 "data_offset": 0, 00:17:29.480 "data_size": 65536 00:17:29.480 }, 00:17:29.480 { 00:17:29.480 "name": "BaseBdev3", 00:17:29.480 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:29.480 "is_configured": true, 00:17:29.480 "data_offset": 0, 00:17:29.480 "data_size": 65536 00:17:29.480 }, 00:17:29.480 { 00:17:29.480 "name": "BaseBdev4", 00:17:29.480 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:29.480 "is_configured": true, 00:17:29.480 "data_offset": 0, 00:17:29.480 "data_size": 65536 00:17:29.480 } 00:17:29.480 ] 00:17:29.480 }' 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.480 [2024-11-27 19:15:38.955545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.480 [2024-11-27 19:15:38.970215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.480 19:15:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:29.480 [2024-11-27 19:15:38.979260] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.418 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.418 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.418 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.419 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.419 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.419 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.419 19:15:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.419 19:15:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.419 19:15:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.419 19:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.419 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.419 
"name": "raid_bdev1", 00:17:30.419 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:30.419 "strip_size_kb": 64, 00:17:30.419 "state": "online", 00:17:30.419 "raid_level": "raid5f", 00:17:30.419 "superblock": false, 00:17:30.419 "num_base_bdevs": 4, 00:17:30.419 "num_base_bdevs_discovered": 4, 00:17:30.419 "num_base_bdevs_operational": 4, 00:17:30.419 "process": { 00:17:30.419 "type": "rebuild", 00:17:30.419 "target": "spare", 00:17:30.419 "progress": { 00:17:30.419 "blocks": 19200, 00:17:30.419 "percent": 9 00:17:30.419 } 00:17:30.419 }, 00:17:30.419 "base_bdevs_list": [ 00:17:30.419 { 00:17:30.419 "name": "spare", 00:17:30.419 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:30.419 "is_configured": true, 00:17:30.419 "data_offset": 0, 00:17:30.419 "data_size": 65536 00:17:30.419 }, 00:17:30.419 { 00:17:30.419 "name": "BaseBdev2", 00:17:30.419 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:30.419 "is_configured": true, 00:17:30.419 "data_offset": 0, 00:17:30.419 "data_size": 65536 00:17:30.419 }, 00:17:30.419 { 00:17:30.419 "name": "BaseBdev3", 00:17:30.419 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:30.419 "is_configured": true, 00:17:30.419 "data_offset": 0, 00:17:30.419 "data_size": 65536 00:17:30.419 }, 00:17:30.419 { 00:17:30.419 "name": "BaseBdev4", 00:17:30.419 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:30.419 "is_configured": true, 00:17:30.419 "data_offset": 0, 00:17:30.419 "data_size": 65536 00:17:30.419 } 00:17:30.419 ] 00:17:30.419 }' 00:17:30.419 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.679 19:15:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=622 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.679 "name": "raid_bdev1", 00:17:30.679 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:30.679 "strip_size_kb": 64, 00:17:30.679 "state": "online", 00:17:30.679 "raid_level": "raid5f", 00:17:30.679 "superblock": false, 00:17:30.679 "num_base_bdevs": 4, 00:17:30.679 
"num_base_bdevs_discovered": 4, 00:17:30.679 "num_base_bdevs_operational": 4, 00:17:30.679 "process": { 00:17:30.679 "type": "rebuild", 00:17:30.679 "target": "spare", 00:17:30.679 "progress": { 00:17:30.679 "blocks": 21120, 00:17:30.679 "percent": 10 00:17:30.679 } 00:17:30.679 }, 00:17:30.679 "base_bdevs_list": [ 00:17:30.679 { 00:17:30.679 "name": "spare", 00:17:30.679 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:30.679 "is_configured": true, 00:17:30.679 "data_offset": 0, 00:17:30.679 "data_size": 65536 00:17:30.679 }, 00:17:30.679 { 00:17:30.679 "name": "BaseBdev2", 00:17:30.679 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:30.679 "is_configured": true, 00:17:30.679 "data_offset": 0, 00:17:30.679 "data_size": 65536 00:17:30.679 }, 00:17:30.679 { 00:17:30.679 "name": "BaseBdev3", 00:17:30.679 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:30.679 "is_configured": true, 00:17:30.679 "data_offset": 0, 00:17:30.679 "data_size": 65536 00:17:30.679 }, 00:17:30.679 { 00:17:30.679 "name": "BaseBdev4", 00:17:30.679 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:30.679 "is_configured": true, 00:17:30.679 "data_offset": 0, 00:17:30.679 "data_size": 65536 00:17:30.679 } 00:17:30.679 ] 00:17:30.679 }' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.679 19:15:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.061 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.061 "name": "raid_bdev1", 00:17:32.061 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:32.061 "strip_size_kb": 64, 00:17:32.061 "state": "online", 00:17:32.061 "raid_level": "raid5f", 00:17:32.061 "superblock": false, 00:17:32.061 "num_base_bdevs": 4, 00:17:32.061 "num_base_bdevs_discovered": 4, 00:17:32.061 "num_base_bdevs_operational": 4, 00:17:32.061 "process": { 00:17:32.061 "type": "rebuild", 00:17:32.061 "target": "spare", 00:17:32.061 "progress": { 00:17:32.061 "blocks": 42240, 00:17:32.061 "percent": 21 00:17:32.061 } 00:17:32.061 }, 00:17:32.061 "base_bdevs_list": [ 00:17:32.061 { 00:17:32.061 "name": "spare", 00:17:32.061 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:32.061 "is_configured": true, 00:17:32.061 "data_offset": 0, 00:17:32.061 "data_size": 65536 00:17:32.061 }, 00:17:32.061 { 00:17:32.061 "name": "BaseBdev2", 00:17:32.061 "uuid": 
"604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:32.061 "is_configured": true, 00:17:32.061 "data_offset": 0, 00:17:32.061 "data_size": 65536 00:17:32.061 }, 00:17:32.061 { 00:17:32.061 "name": "BaseBdev3", 00:17:32.061 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:32.061 "is_configured": true, 00:17:32.061 "data_offset": 0, 00:17:32.061 "data_size": 65536 00:17:32.061 }, 00:17:32.061 { 00:17:32.061 "name": "BaseBdev4", 00:17:32.061 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:32.061 "is_configured": true, 00:17:32.061 "data_offset": 0, 00:17:32.062 "data_size": 65536 00:17:32.062 } 00:17:32.062 ] 00:17:32.062 }' 00:17:32.062 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.062 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.062 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.062 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.062 19:15:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.003 19:15:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.003 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.003 "name": "raid_bdev1", 00:17:33.003 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:33.003 "strip_size_kb": 64, 00:17:33.003 "state": "online", 00:17:33.003 "raid_level": "raid5f", 00:17:33.003 "superblock": false, 00:17:33.003 "num_base_bdevs": 4, 00:17:33.003 "num_base_bdevs_discovered": 4, 00:17:33.003 "num_base_bdevs_operational": 4, 00:17:33.003 "process": { 00:17:33.003 "type": "rebuild", 00:17:33.003 "target": "spare", 00:17:33.003 "progress": { 00:17:33.003 "blocks": 65280, 00:17:33.003 "percent": 33 00:17:33.003 } 00:17:33.003 }, 00:17:33.003 "base_bdevs_list": [ 00:17:33.003 { 00:17:33.003 "name": "spare", 00:17:33.003 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:33.003 "is_configured": true, 00:17:33.003 "data_offset": 0, 00:17:33.003 "data_size": 65536 00:17:33.003 }, 00:17:33.003 { 00:17:33.003 "name": "BaseBdev2", 00:17:33.003 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:33.003 "is_configured": true, 00:17:33.003 "data_offset": 0, 00:17:33.003 "data_size": 65536 00:17:33.003 }, 00:17:33.003 { 00:17:33.003 "name": "BaseBdev3", 00:17:33.003 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:33.003 "is_configured": true, 00:17:33.003 "data_offset": 0, 00:17:33.003 "data_size": 65536 00:17:33.003 }, 00:17:33.003 { 00:17:33.003 "name": "BaseBdev4", 00:17:33.003 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:33.003 "is_configured": true, 00:17:33.003 "data_offset": 0, 00:17:33.004 "data_size": 65536 00:17:33.004 } 
00:17:33.004 ] 00:17:33.004 }' 00:17:33.004 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.004 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.004 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.004 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.004 19:15:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.455 "name": "raid_bdev1", 00:17:34.455 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:34.455 
"strip_size_kb": 64, 00:17:34.455 "state": "online", 00:17:34.455 "raid_level": "raid5f", 00:17:34.455 "superblock": false, 00:17:34.455 "num_base_bdevs": 4, 00:17:34.455 "num_base_bdevs_discovered": 4, 00:17:34.455 "num_base_bdevs_operational": 4, 00:17:34.455 "process": { 00:17:34.455 "type": "rebuild", 00:17:34.455 "target": "spare", 00:17:34.455 "progress": { 00:17:34.455 "blocks": 86400, 00:17:34.455 "percent": 43 00:17:34.455 } 00:17:34.455 }, 00:17:34.455 "base_bdevs_list": [ 00:17:34.455 { 00:17:34.455 "name": "spare", 00:17:34.455 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:34.455 "is_configured": true, 00:17:34.455 "data_offset": 0, 00:17:34.455 "data_size": 65536 00:17:34.455 }, 00:17:34.455 { 00:17:34.455 "name": "BaseBdev2", 00:17:34.455 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:34.455 "is_configured": true, 00:17:34.455 "data_offset": 0, 00:17:34.455 "data_size": 65536 00:17:34.455 }, 00:17:34.455 { 00:17:34.455 "name": "BaseBdev3", 00:17:34.455 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:34.455 "is_configured": true, 00:17:34.455 "data_offset": 0, 00:17:34.455 "data_size": 65536 00:17:34.455 }, 00:17:34.455 { 00:17:34.455 "name": "BaseBdev4", 00:17:34.455 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:34.455 "is_configured": true, 00:17:34.455 "data_offset": 0, 00:17:34.455 "data_size": 65536 00:17:34.455 } 00:17:34.455 ] 00:17:34.455 }' 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.455 19:15:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.422 19:15:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.422 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.422 "name": "raid_bdev1", 00:17:35.422 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:35.422 "strip_size_kb": 64, 00:17:35.422 "state": "online", 00:17:35.422 "raid_level": "raid5f", 00:17:35.422 "superblock": false, 00:17:35.422 "num_base_bdevs": 4, 00:17:35.422 "num_base_bdevs_discovered": 4, 00:17:35.422 "num_base_bdevs_operational": 4, 00:17:35.422 "process": { 00:17:35.422 "type": "rebuild", 00:17:35.422 "target": "spare", 00:17:35.422 "progress": { 00:17:35.422 "blocks": 109440, 00:17:35.422 "percent": 55 00:17:35.422 } 00:17:35.422 }, 00:17:35.422 "base_bdevs_list": [ 00:17:35.422 { 00:17:35.422 "name": "spare", 00:17:35.422 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 
00:17:35.422 "is_configured": true, 00:17:35.423 "data_offset": 0, 00:17:35.423 "data_size": 65536 00:17:35.423 }, 00:17:35.423 { 00:17:35.423 "name": "BaseBdev2", 00:17:35.423 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:35.423 "is_configured": true, 00:17:35.423 "data_offset": 0, 00:17:35.423 "data_size": 65536 00:17:35.423 }, 00:17:35.423 { 00:17:35.423 "name": "BaseBdev3", 00:17:35.423 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:35.423 "is_configured": true, 00:17:35.423 "data_offset": 0, 00:17:35.423 "data_size": 65536 00:17:35.423 }, 00:17:35.423 { 00:17:35.423 "name": "BaseBdev4", 00:17:35.423 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:35.423 "is_configured": true, 00:17:35.423 "data_offset": 0, 00:17:35.423 "data_size": 65536 00:17:35.423 } 00:17:35.423 ] 00:17:35.423 }' 00:17:35.423 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.423 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.423 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.423 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.423 19:15:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.363 "name": "raid_bdev1", 00:17:36.363 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:36.363 "strip_size_kb": 64, 00:17:36.363 "state": "online", 00:17:36.363 "raid_level": "raid5f", 00:17:36.363 "superblock": false, 00:17:36.363 "num_base_bdevs": 4, 00:17:36.363 "num_base_bdevs_discovered": 4, 00:17:36.363 "num_base_bdevs_operational": 4, 00:17:36.363 "process": { 00:17:36.363 "type": "rebuild", 00:17:36.363 "target": "spare", 00:17:36.363 "progress": { 00:17:36.363 "blocks": 130560, 00:17:36.363 "percent": 66 00:17:36.363 } 00:17:36.363 }, 00:17:36.363 "base_bdevs_list": [ 00:17:36.363 { 00:17:36.363 "name": "spare", 00:17:36.363 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 0, 00:17:36.363 "data_size": 65536 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "name": "BaseBdev2", 00:17:36.363 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 0, 00:17:36.363 "data_size": 65536 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "name": "BaseBdev3", 00:17:36.363 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 0, 00:17:36.363 "data_size": 65536 00:17:36.363 }, 00:17:36.363 { 00:17:36.363 "name": 
"BaseBdev4", 00:17:36.363 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:36.363 "is_configured": true, 00:17:36.363 "data_offset": 0, 00:17:36.363 "data_size": 65536 00:17:36.363 } 00:17:36.363 ] 00:17:36.363 }' 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.363 19:15:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.623 19:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.623 19:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.564 19:15:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.564 "name": "raid_bdev1", 00:17:37.564 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:37.564 "strip_size_kb": 64, 00:17:37.564 "state": "online", 00:17:37.564 "raid_level": "raid5f", 00:17:37.564 "superblock": false, 00:17:37.564 "num_base_bdevs": 4, 00:17:37.564 "num_base_bdevs_discovered": 4, 00:17:37.564 "num_base_bdevs_operational": 4, 00:17:37.564 "process": { 00:17:37.564 "type": "rebuild", 00:17:37.564 "target": "spare", 00:17:37.564 "progress": { 00:17:37.564 "blocks": 153600, 00:17:37.564 "percent": 78 00:17:37.564 } 00:17:37.564 }, 00:17:37.564 "base_bdevs_list": [ 00:17:37.564 { 00:17:37.564 "name": "spare", 00:17:37.564 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:37.564 "is_configured": true, 00:17:37.564 "data_offset": 0, 00:17:37.564 "data_size": 65536 00:17:37.564 }, 00:17:37.564 { 00:17:37.564 "name": "BaseBdev2", 00:17:37.564 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:37.564 "is_configured": true, 00:17:37.564 "data_offset": 0, 00:17:37.564 "data_size": 65536 00:17:37.564 }, 00:17:37.564 { 00:17:37.564 "name": "BaseBdev3", 00:17:37.564 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:37.564 "is_configured": true, 00:17:37.564 "data_offset": 0, 00:17:37.564 "data_size": 65536 00:17:37.564 }, 00:17:37.564 { 00:17:37.564 "name": "BaseBdev4", 00:17:37.564 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:37.564 "is_configured": true, 00:17:37.564 "data_offset": 0, 00:17:37.564 "data_size": 65536 00:17:37.564 } 00:17:37.564 ] 00:17:37.564 }' 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.564 19:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.946 "name": "raid_bdev1", 00:17:38.946 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:38.946 "strip_size_kb": 64, 00:17:38.946 "state": "online", 00:17:38.946 "raid_level": "raid5f", 00:17:38.946 "superblock": false, 00:17:38.946 "num_base_bdevs": 4, 00:17:38.946 "num_base_bdevs_discovered": 4, 00:17:38.946 "num_base_bdevs_operational": 4, 00:17:38.946 "process": { 00:17:38.946 "type": "rebuild", 00:17:38.946 "target": "spare", 00:17:38.946 "progress": { 00:17:38.946 "blocks": 174720, 00:17:38.946 "percent": 88 
00:17:38.946 } 00:17:38.946 }, 00:17:38.946 "base_bdevs_list": [ 00:17:38.946 { 00:17:38.946 "name": "spare", 00:17:38.946 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:38.946 "is_configured": true, 00:17:38.946 "data_offset": 0, 00:17:38.946 "data_size": 65536 00:17:38.946 }, 00:17:38.946 { 00:17:38.946 "name": "BaseBdev2", 00:17:38.946 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:38.946 "is_configured": true, 00:17:38.946 "data_offset": 0, 00:17:38.946 "data_size": 65536 00:17:38.946 }, 00:17:38.946 { 00:17:38.946 "name": "BaseBdev3", 00:17:38.946 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:38.946 "is_configured": true, 00:17:38.946 "data_offset": 0, 00:17:38.946 "data_size": 65536 00:17:38.946 }, 00:17:38.946 { 00:17:38.946 "name": "BaseBdev4", 00:17:38.946 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:38.946 "is_configured": true, 00:17:38.946 "data_offset": 0, 00:17:38.946 "data_size": 65536 00:17:38.946 } 00:17:38.946 ] 00:17:38.946 }' 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.946 19:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.886 [2024-11-27 19:15:49.321811] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:39.886 [2024-11-27 19:15:49.321925] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:39.886 [2024-11-27 19:15:49.321993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.886 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.886 "name": "raid_bdev1", 00:17:39.886 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:39.886 "strip_size_kb": 64, 00:17:39.886 "state": "online", 00:17:39.886 "raid_level": "raid5f", 00:17:39.886 "superblock": false, 00:17:39.886 "num_base_bdevs": 4, 00:17:39.886 "num_base_bdevs_discovered": 4, 00:17:39.886 "num_base_bdevs_operational": 4, 00:17:39.886 "process": { 00:17:39.886 "type": "rebuild", 00:17:39.886 "target": "spare", 00:17:39.886 "progress": { 00:17:39.886 "blocks": 195840, 00:17:39.886 "percent": 99 00:17:39.886 } 00:17:39.886 }, 00:17:39.886 "base_bdevs_list": [ 00:17:39.886 { 00:17:39.886 "name": "spare", 00:17:39.886 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:39.886 "is_configured": true, 00:17:39.886 "data_offset": 
0, 00:17:39.886 "data_size": 65536 00:17:39.886 }, 00:17:39.886 { 00:17:39.886 "name": "BaseBdev2", 00:17:39.886 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:39.886 "is_configured": true, 00:17:39.886 "data_offset": 0, 00:17:39.886 "data_size": 65536 00:17:39.886 }, 00:17:39.886 { 00:17:39.886 "name": "BaseBdev3", 00:17:39.886 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:39.886 "is_configured": true, 00:17:39.886 "data_offset": 0, 00:17:39.886 "data_size": 65536 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "name": "BaseBdev4", 00:17:39.887 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:39.887 "is_configured": true, 00:17:39.887 "data_offset": 0, 00:17:39.887 "data_size": 65536 00:17:39.887 } 00:17:39.887 ] 00:17:39.887 }' 00:17:39.887 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.887 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.887 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.887 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.887 19:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.269 19:15:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.269 "name": "raid_bdev1", 00:17:41.269 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:41.269 "strip_size_kb": 64, 00:17:41.269 "state": "online", 00:17:41.269 "raid_level": "raid5f", 00:17:41.269 "superblock": false, 00:17:41.269 "num_base_bdevs": 4, 00:17:41.269 "num_base_bdevs_discovered": 4, 00:17:41.269 "num_base_bdevs_operational": 4, 00:17:41.269 "base_bdevs_list": [ 00:17:41.269 { 00:17:41.269 "name": "spare", 00:17:41.269 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev2", 00:17:41.269 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev3", 00:17:41.269 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev4", 00:17:41.269 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 } 00:17:41.269 ] 00:17:41.269 }' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.269 "name": "raid_bdev1", 00:17:41.269 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:41.269 "strip_size_kb": 64, 00:17:41.269 "state": "online", 00:17:41.269 "raid_level": "raid5f", 00:17:41.269 "superblock": false, 00:17:41.269 "num_base_bdevs": 4, 00:17:41.269 "num_base_bdevs_discovered": 4, 
00:17:41.269 "num_base_bdevs_operational": 4, 00:17:41.269 "base_bdevs_list": [ 00:17:41.269 { 00:17:41.269 "name": "spare", 00:17:41.269 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev2", 00:17:41.269 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev3", 00:17:41.269 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev4", 00:17:41.269 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 } 00:17:41.269 ] 00:17:41.269 }' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.269 "name": "raid_bdev1", 00:17:41.269 "uuid": "7a4b2c1b-35e8-4d88-9956-3899417fb381", 00:17:41.269 "strip_size_kb": 64, 00:17:41.269 "state": "online", 00:17:41.269 "raid_level": "raid5f", 00:17:41.269 "superblock": false, 00:17:41.269 "num_base_bdevs": 4, 00:17:41.269 "num_base_bdevs_discovered": 4, 00:17:41.269 "num_base_bdevs_operational": 4, 00:17:41.269 "base_bdevs_list": [ 00:17:41.269 { 00:17:41.269 "name": "spare", 00:17:41.269 "uuid": "6c3a812f-d080-56bf-a3ec-b3b9355de6ee", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev2", 00:17:41.269 "uuid": "604b08e0-8025-524f-9ca6-edd9c1bf2539", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 
"data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev3", 00:17:41.269 "uuid": "0af2309a-e004-5b9b-bc98-c4abccb6d613", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 }, 00:17:41.269 { 00:17:41.269 "name": "BaseBdev4", 00:17:41.269 "uuid": "fb9adb81-40f7-505e-b65c-530d44026541", 00:17:41.269 "is_configured": true, 00:17:41.269 "data_offset": 0, 00:17:41.269 "data_size": 65536 00:17:41.269 } 00:17:41.269 ] 00:17:41.269 }' 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.269 19:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.840 [2024-11-27 19:15:51.200641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.840 [2024-11-27 19:15:51.200731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.840 [2024-11-27 19:15:51.200839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.840 [2024-11-27 19:15:51.200947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.840 [2024-11-27 19:15:51.201030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.840 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:41.840 /dev/nbd0 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.099 1+0 records in 00:17:42.099 1+0 records out 00:17:42.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004697 s, 8.7 MB/s 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:42.099 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:42.100 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.100 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.100 19:15:51 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:42.100 /dev/nbd1 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.359 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.360 1+0 records in 00:17:42.360 1+0 records out 00:17:42.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564947 s, 7.3 MB/s 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 
']' 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.360 19:15:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.620 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84670 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84670 ']' 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84670 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84670 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.880 19:15:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84670' 00:17:42.880 killing process with pid 84670 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84670 00:17:42.880 Received shutdown signal, test time was about 60.000000 seconds 00:17:42.880 00:17:42.880 Latency(us) 00:17:42.880 [2024-11-27T19:15:52.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.880 [2024-11-27T19:15:52.516Z] =================================================================================================================== 00:17:42.880 [2024-11-27T19:15:52.516Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.880 [2024-11-27 19:15:52.437652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.880 19:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84670 00:17:43.450 [2024-11-27 19:15:52.899513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.391 19:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:44.391 00:17:44.391 real 0m20.027s 00:17:44.391 user 0m23.832s 00:17:44.391 sys 0m2.393s 00:17:44.391 19:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.391 19:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.391 ************************************ 00:17:44.391 END TEST raid5f_rebuild_test 00:17:44.391 ************************************ 00:17:44.391 19:15:53 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:44.391 19:15:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:44.391 19:15:53 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:17:44.391 19:15:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.391 ************************************ 00:17:44.391 START TEST raid5f_rebuild_test_sb 00:17:44.391 ************************************ 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:44.391 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85186 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85186 00:17:44.392 19:15:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:44.392 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85186 ']' 00:17:44.653 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.653 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.653 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.653 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.653 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.653 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:44.653 Zero copy mechanism will not be used. 00:17:44.653 [2024-11-27 19:15:54.123289] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:44.653 [2024-11-27 19:15:54.123407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85186 ] 00:17:44.914 [2024-11-27 19:15:54.303955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.914 [2024-11-27 19:15:54.410933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.174 [2024-11-27 19:15:54.598863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.174 [2024-11-27 19:15:54.598921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.434 BaseBdev1_malloc 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.434 [2024-11-27 19:15:54.969563] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:45.434 [2024-11-27 19:15:54.969675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.434 [2024-11-27 19:15:54.969740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:45.434 [2024-11-27 19:15:54.969773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.434 [2024-11-27 19:15:54.971853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.434 [2024-11-27 19:15:54.971928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:45.434 BaseBdev1 00:17:45.434 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.435 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.435 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:45.435 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.435 19:15:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.435 BaseBdev2_malloc 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.435 [2024-11-27 19:15:55.019086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:45.435 [2024-11-27 19:15:55.019158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:45.435 [2024-11-27 19:15:55.019180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:45.435 [2024-11-27 19:15:55.019190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.435 [2024-11-27 19:15:55.021194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.435 [2024-11-27 19:15:55.021264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.435 BaseBdev2 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.435 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 BaseBdev3_malloc 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 [2024-11-27 19:15:55.103083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:45.695 [2024-11-27 19:15:55.103132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.695 [2024-11-27 19:15:55.103168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:45.695 [2024-11-27 
19:15:55.103179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.695 [2024-11-27 19:15:55.105138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.695 [2024-11-27 19:15:55.105177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:45.695 BaseBdev3 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 BaseBdev4_malloc 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 [2024-11-27 19:15:55.152254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:45.695 [2024-11-27 19:15:55.152310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.695 [2024-11-27 19:15:55.152328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:45.695 [2024-11-27 19:15:55.152338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.695 [2024-11-27 19:15:55.154341] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:45.695 [2024-11-27 19:15:55.154415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:45.695 BaseBdev4 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 spare_malloc 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 spare_delay 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 [2024-11-27 19:15:55.217048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:45.695 [2024-11-27 19:15:55.217094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.695 [2024-11-27 19:15:55.217126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:45.695 [2024-11-27 19:15:55.217137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.695 [2024-11-27 19:15:55.219128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.695 [2024-11-27 19:15:55.219225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:45.695 spare 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.695 [2024-11-27 19:15:55.229065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.695 [2024-11-27 19:15:55.230770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.695 [2024-11-27 19:15:55.230828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.695 [2024-11-27 19:15:55.230877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:45.695 [2024-11-27 19:15:55.231058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:45.695 [2024-11-27 19:15:55.231071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:45.695 [2024-11-27 19:15:55.231293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:45.695 [2024-11-27 19:15:55.237954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:45.695 [2024-11-27 19:15:55.238013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:45.695 [2024-11-27 19:15:55.238200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.695 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.696 19:15:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.696 "name": "raid_bdev1", 00:17:45.696 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:45.696 "strip_size_kb": 64, 00:17:45.696 "state": "online", 00:17:45.696 "raid_level": "raid5f", 00:17:45.696 "superblock": true, 00:17:45.696 "num_base_bdevs": 4, 00:17:45.696 "num_base_bdevs_discovered": 4, 00:17:45.696 "num_base_bdevs_operational": 4, 00:17:45.696 "base_bdevs_list": [ 00:17:45.696 { 00:17:45.696 "name": "BaseBdev1", 00:17:45.696 "uuid": "f29a5db3-be06-5110-a753-a07ca442f6c4", 00:17:45.696 "is_configured": true, 00:17:45.696 "data_offset": 2048, 00:17:45.696 "data_size": 63488 00:17:45.696 }, 00:17:45.696 { 00:17:45.696 "name": "BaseBdev2", 00:17:45.696 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:45.696 "is_configured": true, 00:17:45.696 "data_offset": 2048, 00:17:45.696 "data_size": 63488 00:17:45.696 }, 00:17:45.696 { 00:17:45.696 "name": "BaseBdev3", 00:17:45.696 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:45.696 "is_configured": true, 00:17:45.696 "data_offset": 2048, 00:17:45.696 "data_size": 63488 00:17:45.696 }, 00:17:45.696 { 00:17:45.696 "name": "BaseBdev4", 00:17:45.696 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:45.696 "is_configured": true, 00:17:45.696 "data_offset": 2048, 00:17:45.696 "data_size": 63488 00:17:45.696 } 00:17:45.696 ] 00:17:45.696 }' 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.696 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.265 19:15:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.265 [2024-11-27 19:15:55.701646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:46.265 19:15:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.265 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:46.525 [2024-11-27 19:15:55.953089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:46.525 /dev/nbd0 00:17:46.525 19:15:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:46.525 1+0 records in 00:17:46.525 
1+0 records out 00:17:46.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352209 s, 11.6 MB/s 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:46.525 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:47.094 496+0 records in 00:17:47.094 496+0 records out 00:17:47.094 97517568 bytes (98 MB, 93 MiB) copied, 0.459472 s, 212 MB/s 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.094 19:15:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.094 [2024-11-27 19:15:56.686235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.094 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 [2024-11-27 19:15:56.730620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:47.354 19:15:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.354 "name": "raid_bdev1", 00:17:47.354 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:47.354 "strip_size_kb": 64, 00:17:47.354 "state": "online", 00:17:47.354 "raid_level": "raid5f", 00:17:47.354 "superblock": true, 00:17:47.354 "num_base_bdevs": 4, 00:17:47.354 "num_base_bdevs_discovered": 3, 00:17:47.354 "num_base_bdevs_operational": 3, 00:17:47.354 
"base_bdevs_list": [ 00:17:47.354 { 00:17:47.354 "name": null, 00:17:47.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.354 "is_configured": false, 00:17:47.354 "data_offset": 0, 00:17:47.354 "data_size": 63488 00:17:47.354 }, 00:17:47.354 { 00:17:47.354 "name": "BaseBdev2", 00:17:47.354 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:47.354 "is_configured": true, 00:17:47.354 "data_offset": 2048, 00:17:47.354 "data_size": 63488 00:17:47.354 }, 00:17:47.354 { 00:17:47.354 "name": "BaseBdev3", 00:17:47.354 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:47.354 "is_configured": true, 00:17:47.354 "data_offset": 2048, 00:17:47.354 "data_size": 63488 00:17:47.354 }, 00:17:47.354 { 00:17:47.354 "name": "BaseBdev4", 00:17:47.354 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:47.354 "is_configured": true, 00:17:47.354 "data_offset": 2048, 00:17:47.354 "data_size": 63488 00:17:47.354 } 00:17:47.354 ] 00:17:47.354 }' 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.354 19:15:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.614 19:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.614 19:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.614 19:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.614 [2024-11-27 19:15:57.189825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.614 [2024-11-27 19:15:57.204725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:47.614 19:15:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.615 19:15:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:47.615 [2024-11-27 19:15:57.214011] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.996 "name": "raid_bdev1", 00:17:48.996 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:48.996 "strip_size_kb": 64, 00:17:48.996 "state": "online", 00:17:48.996 "raid_level": "raid5f", 00:17:48.996 "superblock": true, 00:17:48.996 "num_base_bdevs": 4, 00:17:48.996 "num_base_bdevs_discovered": 4, 00:17:48.996 "num_base_bdevs_operational": 4, 00:17:48.996 "process": { 00:17:48.996 "type": "rebuild", 00:17:48.996 "target": "spare", 00:17:48.996 "progress": { 00:17:48.996 "blocks": 19200, 00:17:48.996 "percent": 10 00:17:48.996 } 00:17:48.996 }, 00:17:48.996 "base_bdevs_list": [ 00:17:48.996 { 00:17:48.996 "name": "spare", 00:17:48.996 "uuid": 
"edcb1069-abed-5ae6-8037-d796586818cb", 00:17:48.996 "is_configured": true, 00:17:48.996 "data_offset": 2048, 00:17:48.996 "data_size": 63488 00:17:48.996 }, 00:17:48.996 { 00:17:48.996 "name": "BaseBdev2", 00:17:48.996 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:48.996 "is_configured": true, 00:17:48.996 "data_offset": 2048, 00:17:48.996 "data_size": 63488 00:17:48.996 }, 00:17:48.996 { 00:17:48.996 "name": "BaseBdev3", 00:17:48.996 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:48.996 "is_configured": true, 00:17:48.996 "data_offset": 2048, 00:17:48.996 "data_size": 63488 00:17:48.996 }, 00:17:48.996 { 00:17:48.996 "name": "BaseBdev4", 00:17:48.996 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:48.996 "is_configured": true, 00:17:48.996 "data_offset": 2048, 00:17:48.996 "data_size": 63488 00:17:48.996 } 00:17:48.996 ] 00:17:48.996 }' 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.996 [2024-11-27 19:15:58.364749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.996 [2024-11-27 19:15:58.419676] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.996 [2024-11-27 19:15:58.419777] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.996 [2024-11-27 19:15:58.419794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.996 [2024-11-27 19:15:58.419804] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.996 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.997 "name": "raid_bdev1", 00:17:48.997 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:48.997 "strip_size_kb": 64, 00:17:48.997 "state": "online", 00:17:48.997 "raid_level": "raid5f", 00:17:48.997 "superblock": true, 00:17:48.997 "num_base_bdevs": 4, 00:17:48.997 "num_base_bdevs_discovered": 3, 00:17:48.997 "num_base_bdevs_operational": 3, 00:17:48.997 "base_bdevs_list": [ 00:17:48.997 { 00:17:48.997 "name": null, 00:17:48.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.997 "is_configured": false, 00:17:48.997 "data_offset": 0, 00:17:48.997 "data_size": 63488 00:17:48.997 }, 00:17:48.997 { 00:17:48.997 "name": "BaseBdev2", 00:17:48.997 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:48.997 "is_configured": true, 00:17:48.997 "data_offset": 2048, 00:17:48.997 "data_size": 63488 00:17:48.997 }, 00:17:48.997 { 00:17:48.997 "name": "BaseBdev3", 00:17:48.997 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:48.997 "is_configured": true, 00:17:48.997 "data_offset": 2048, 00:17:48.997 "data_size": 63488 00:17:48.997 }, 00:17:48.997 { 00:17:48.997 "name": "BaseBdev4", 00:17:48.997 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:48.997 "is_configured": true, 00:17:48.997 "data_offset": 2048, 00:17:48.997 "data_size": 63488 00:17:48.997 } 00:17:48.997 ] 00:17:48.997 }' 00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.997 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.568 
19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.568 "name": "raid_bdev1", 00:17:49.568 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:49.568 "strip_size_kb": 64, 00:17:49.568 "state": "online", 00:17:49.568 "raid_level": "raid5f", 00:17:49.568 "superblock": true, 00:17:49.568 "num_base_bdevs": 4, 00:17:49.568 "num_base_bdevs_discovered": 3, 00:17:49.568 "num_base_bdevs_operational": 3, 00:17:49.568 "base_bdevs_list": [ 00:17:49.568 { 00:17:49.568 "name": null, 00:17:49.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.568 "is_configured": false, 00:17:49.568 "data_offset": 0, 00:17:49.568 "data_size": 63488 00:17:49.568 }, 00:17:49.568 { 00:17:49.568 "name": "BaseBdev2", 00:17:49.568 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:49.568 "is_configured": true, 00:17:49.568 "data_offset": 2048, 00:17:49.568 "data_size": 63488 00:17:49.568 }, 00:17:49.568 { 00:17:49.568 "name": "BaseBdev3", 00:17:49.568 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:49.568 "is_configured": true, 00:17:49.568 "data_offset": 2048, 00:17:49.568 
"data_size": 63488 00:17:49.568 }, 00:17:49.568 { 00:17:49.568 "name": "BaseBdev4", 00:17:49.568 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:49.568 "is_configured": true, 00:17:49.568 "data_offset": 2048, 00:17:49.568 "data_size": 63488 00:17:49.568 } 00:17:49.568 ] 00:17:49.568 }' 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.568 19:15:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.568 19:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.568 19:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.568 19:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.568 19:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 [2024-11-27 19:15:59.039760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.568 [2024-11-27 19:15:59.053947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:49.568 19:15:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.568 19:15:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:49.568 [2024-11-27 19:15:59.062871] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.511 "name": "raid_bdev1", 00:17:50.511 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:50.511 "strip_size_kb": 64, 00:17:50.511 "state": "online", 00:17:50.511 "raid_level": "raid5f", 00:17:50.511 "superblock": true, 00:17:50.511 "num_base_bdevs": 4, 00:17:50.511 "num_base_bdevs_discovered": 4, 00:17:50.511 "num_base_bdevs_operational": 4, 00:17:50.511 "process": { 00:17:50.511 "type": "rebuild", 00:17:50.511 "target": "spare", 00:17:50.511 "progress": { 00:17:50.511 "blocks": 19200, 00:17:50.511 "percent": 10 00:17:50.511 } 00:17:50.511 }, 00:17:50.511 "base_bdevs_list": [ 00:17:50.511 { 00:17:50.511 "name": "spare", 00:17:50.511 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:50.511 "is_configured": true, 00:17:50.511 "data_offset": 2048, 00:17:50.511 "data_size": 63488 00:17:50.511 }, 00:17:50.511 { 00:17:50.511 "name": "BaseBdev2", 00:17:50.511 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:50.511 "is_configured": true, 00:17:50.511 "data_offset": 2048, 00:17:50.511 "data_size": 63488 00:17:50.511 }, 00:17:50.511 { 
00:17:50.511 "name": "BaseBdev3", 00:17:50.511 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:50.511 "is_configured": true, 00:17:50.511 "data_offset": 2048, 00:17:50.511 "data_size": 63488 00:17:50.511 }, 00:17:50.511 { 00:17:50.511 "name": "BaseBdev4", 00:17:50.511 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:50.511 "is_configured": true, 00:17:50.511 "data_offset": 2048, 00:17:50.511 "data_size": 63488 00:17:50.511 } 00:17:50.511 ] 00:17:50.511 }' 00:17:50.511 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:50.772 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=642 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.772 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.772 "name": "raid_bdev1", 00:17:50.772 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:50.772 "strip_size_kb": 64, 00:17:50.772 "state": "online", 00:17:50.772 "raid_level": "raid5f", 00:17:50.772 "superblock": true, 00:17:50.772 "num_base_bdevs": 4, 00:17:50.772 "num_base_bdevs_discovered": 4, 00:17:50.772 "num_base_bdevs_operational": 4, 00:17:50.772 "process": { 00:17:50.772 "type": "rebuild", 00:17:50.772 "target": "spare", 00:17:50.772 "progress": { 00:17:50.772 "blocks": 21120, 00:17:50.772 "percent": 11 00:17:50.772 } 00:17:50.772 }, 00:17:50.772 "base_bdevs_list": [ 00:17:50.772 { 00:17:50.772 "name": "spare", 00:17:50.772 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:50.772 "is_configured": true, 00:17:50.772 "data_offset": 2048, 00:17:50.772 "data_size": 63488 00:17:50.772 }, 00:17:50.772 { 00:17:50.772 "name": "BaseBdev2", 00:17:50.772 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:50.772 "is_configured": true, 00:17:50.772 "data_offset": 2048, 00:17:50.772 "data_size": 63488 00:17:50.772 }, 00:17:50.772 { 
00:17:50.772 "name": "BaseBdev3", 00:17:50.772 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:50.772 "is_configured": true, 00:17:50.772 "data_offset": 2048, 00:17:50.772 "data_size": 63488 00:17:50.772 }, 00:17:50.772 { 00:17:50.772 "name": "BaseBdev4", 00:17:50.772 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:50.772 "is_configured": true, 00:17:50.772 "data_offset": 2048, 00:17:50.772 "data_size": 63488 00:17:50.772 } 00:17:50.772 ] 00:17:50.772 }' 00:17:50.773 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.773 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.773 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.773 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.773 19:16:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.716 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.977 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.977 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.977 "name": "raid_bdev1", 00:17:51.977 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:51.977 "strip_size_kb": 64, 00:17:51.977 "state": "online", 00:17:51.977 "raid_level": "raid5f", 00:17:51.977 "superblock": true, 00:17:51.977 "num_base_bdevs": 4, 00:17:51.977 "num_base_bdevs_discovered": 4, 00:17:51.977 "num_base_bdevs_operational": 4, 00:17:51.977 "process": { 00:17:51.977 "type": "rebuild", 00:17:51.977 "target": "spare", 00:17:51.977 "progress": { 00:17:51.977 "blocks": 42240, 00:17:51.977 "percent": 22 00:17:51.977 } 00:17:51.977 }, 00:17:51.977 "base_bdevs_list": [ 00:17:51.977 { 00:17:51.977 "name": "spare", 00:17:51.977 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:51.977 "is_configured": true, 00:17:51.977 "data_offset": 2048, 00:17:51.977 "data_size": 63488 00:17:51.977 }, 00:17:51.977 { 00:17:51.978 "name": "BaseBdev2", 00:17:51.978 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:51.978 "is_configured": true, 00:17:51.978 "data_offset": 2048, 00:17:51.978 "data_size": 63488 00:17:51.978 }, 00:17:51.978 { 00:17:51.978 "name": "BaseBdev3", 00:17:51.978 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:51.978 "is_configured": true, 00:17:51.978 "data_offset": 2048, 00:17:51.978 "data_size": 63488 00:17:51.978 }, 00:17:51.978 { 00:17:51.978 "name": "BaseBdev4", 00:17:51.978 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:51.978 "is_configured": true, 00:17:51.978 "data_offset": 2048, 00:17:51.978 "data_size": 63488 00:17:51.978 } 00:17:51.978 ] 00:17:51.978 }' 00:17:51.978 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:51.978 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.978 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.978 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.978 19:16:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.919 "name": "raid_bdev1", 00:17:52.919 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:52.919 "strip_size_kb": 64, 00:17:52.919 "state": "online", 00:17:52.919 
"raid_level": "raid5f", 00:17:52.919 "superblock": true, 00:17:52.919 "num_base_bdevs": 4, 00:17:52.919 "num_base_bdevs_discovered": 4, 00:17:52.919 "num_base_bdevs_operational": 4, 00:17:52.919 "process": { 00:17:52.919 "type": "rebuild", 00:17:52.919 "target": "spare", 00:17:52.919 "progress": { 00:17:52.919 "blocks": 65280, 00:17:52.919 "percent": 34 00:17:52.919 } 00:17:52.919 }, 00:17:52.919 "base_bdevs_list": [ 00:17:52.919 { 00:17:52.919 "name": "spare", 00:17:52.919 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:52.919 "is_configured": true, 00:17:52.919 "data_offset": 2048, 00:17:52.919 "data_size": 63488 00:17:52.919 }, 00:17:52.919 { 00:17:52.919 "name": "BaseBdev2", 00:17:52.919 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:52.919 "is_configured": true, 00:17:52.919 "data_offset": 2048, 00:17:52.919 "data_size": 63488 00:17:52.919 }, 00:17:52.919 { 00:17:52.919 "name": "BaseBdev3", 00:17:52.919 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:52.919 "is_configured": true, 00:17:52.919 "data_offset": 2048, 00:17:52.919 "data_size": 63488 00:17:52.919 }, 00:17:52.919 { 00:17:52.919 "name": "BaseBdev4", 00:17:52.919 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:52.919 "is_configured": true, 00:17:52.919 "data_offset": 2048, 00:17:52.919 "data_size": 63488 00:17:52.919 } 00:17:52.919 ] 00:17:52.919 }' 00:17:52.919 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.179 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.179 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.179 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.179 19:16:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.119 "name": "raid_bdev1", 00:17:54.119 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:54.119 "strip_size_kb": 64, 00:17:54.119 "state": "online", 00:17:54.119 "raid_level": "raid5f", 00:17:54.119 "superblock": true, 00:17:54.119 "num_base_bdevs": 4, 00:17:54.119 "num_base_bdevs_discovered": 4, 00:17:54.119 "num_base_bdevs_operational": 4, 00:17:54.119 "process": { 00:17:54.119 "type": "rebuild", 00:17:54.119 "target": "spare", 00:17:54.119 "progress": { 00:17:54.119 "blocks": 86400, 00:17:54.119 "percent": 45 00:17:54.119 } 00:17:54.119 }, 00:17:54.119 "base_bdevs_list": [ 00:17:54.119 { 00:17:54.119 "name": "spare", 00:17:54.119 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:54.119 "is_configured": true, 
00:17:54.119 "data_offset": 2048, 00:17:54.119 "data_size": 63488 00:17:54.119 }, 00:17:54.119 { 00:17:54.119 "name": "BaseBdev2", 00:17:54.119 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:54.119 "is_configured": true, 00:17:54.119 "data_offset": 2048, 00:17:54.119 "data_size": 63488 00:17:54.119 }, 00:17:54.119 { 00:17:54.119 "name": "BaseBdev3", 00:17:54.119 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:54.119 "is_configured": true, 00:17:54.119 "data_offset": 2048, 00:17:54.119 "data_size": 63488 00:17:54.119 }, 00:17:54.119 { 00:17:54.119 "name": "BaseBdev4", 00:17:54.119 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:54.119 "is_configured": true, 00:17:54.119 "data_offset": 2048, 00:17:54.119 "data_size": 63488 00:17:54.119 } 00:17:54.119 ] 00:17:54.119 }' 00:17:54.119 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.384 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.384 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.384 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.384 19:16:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.351 "name": "raid_bdev1", 00:17:55.351 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:55.351 "strip_size_kb": 64, 00:17:55.351 "state": "online", 00:17:55.351 "raid_level": "raid5f", 00:17:55.351 "superblock": true, 00:17:55.351 "num_base_bdevs": 4, 00:17:55.351 "num_base_bdevs_discovered": 4, 00:17:55.351 "num_base_bdevs_operational": 4, 00:17:55.351 "process": { 00:17:55.351 "type": "rebuild", 00:17:55.351 "target": "spare", 00:17:55.351 "progress": { 00:17:55.351 "blocks": 109440, 00:17:55.351 "percent": 57 00:17:55.351 } 00:17:55.351 }, 00:17:55.351 "base_bdevs_list": [ 00:17:55.351 { 00:17:55.351 "name": "spare", 00:17:55.351 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:55.351 "is_configured": true, 00:17:55.351 "data_offset": 2048, 00:17:55.351 "data_size": 63488 00:17:55.351 }, 00:17:55.351 { 00:17:55.351 "name": "BaseBdev2", 00:17:55.351 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:55.351 "is_configured": true, 00:17:55.351 "data_offset": 2048, 00:17:55.351 "data_size": 63488 00:17:55.351 }, 00:17:55.351 { 00:17:55.351 "name": "BaseBdev3", 00:17:55.351 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:55.351 "is_configured": true, 00:17:55.351 "data_offset": 2048, 00:17:55.351 "data_size": 63488 00:17:55.351 }, 00:17:55.351 
{ 00:17:55.351 "name": "BaseBdev4", 00:17:55.351 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:55.351 "is_configured": true, 00:17:55.351 "data_offset": 2048, 00:17:55.351 "data_size": 63488 00:17:55.351 } 00:17:55.351 ] 00:17:55.351 }' 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.351 19:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.734 19:16:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.734 19:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.734 "name": "raid_bdev1", 00:17:56.734 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:56.734 "strip_size_kb": 64, 00:17:56.734 "state": "online", 00:17:56.734 "raid_level": "raid5f", 00:17:56.734 "superblock": true, 00:17:56.734 "num_base_bdevs": 4, 00:17:56.734 "num_base_bdevs_discovered": 4, 00:17:56.734 "num_base_bdevs_operational": 4, 00:17:56.734 "process": { 00:17:56.734 "type": "rebuild", 00:17:56.734 "target": "spare", 00:17:56.734 "progress": { 00:17:56.734 "blocks": 130560, 00:17:56.734 "percent": 68 00:17:56.734 } 00:17:56.734 }, 00:17:56.734 "base_bdevs_list": [ 00:17:56.734 { 00:17:56.734 "name": "spare", 00:17:56.734 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:56.734 "is_configured": true, 00:17:56.734 "data_offset": 2048, 00:17:56.734 "data_size": 63488 00:17:56.734 }, 00:17:56.734 { 00:17:56.734 "name": "BaseBdev2", 00:17:56.734 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:56.734 "is_configured": true, 00:17:56.734 "data_offset": 2048, 00:17:56.734 "data_size": 63488 00:17:56.734 }, 00:17:56.734 { 00:17:56.734 "name": "BaseBdev3", 00:17:56.734 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:56.734 "is_configured": true, 00:17:56.734 "data_offset": 2048, 00:17:56.734 "data_size": 63488 00:17:56.734 }, 00:17:56.734 { 00:17:56.734 "name": "BaseBdev4", 00:17:56.734 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:56.734 "is_configured": true, 00:17:56.734 "data_offset": 2048, 00:17:56.734 "data_size": 63488 00:17:56.734 } 00:17:56.734 ] 00:17:56.734 }' 00:17:56.734 19:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.734 19:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.734 19:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:56.734 19:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.734 19:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.676 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.676 "name": "raid_bdev1", 00:17:57.676 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:57.677 "strip_size_kb": 64, 00:17:57.677 "state": "online", 00:17:57.677 "raid_level": "raid5f", 00:17:57.677 "superblock": true, 00:17:57.677 "num_base_bdevs": 4, 00:17:57.677 "num_base_bdevs_discovered": 4, 00:17:57.677 "num_base_bdevs_operational": 4, 00:17:57.677 "process": { 00:17:57.677 "type": 
"rebuild", 00:17:57.677 "target": "spare", 00:17:57.677 "progress": { 00:17:57.677 "blocks": 153600, 00:17:57.677 "percent": 80 00:17:57.677 } 00:17:57.677 }, 00:17:57.677 "base_bdevs_list": [ 00:17:57.677 { 00:17:57.677 "name": "spare", 00:17:57.677 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:57.677 "is_configured": true, 00:17:57.677 "data_offset": 2048, 00:17:57.677 "data_size": 63488 00:17:57.677 }, 00:17:57.677 { 00:17:57.677 "name": "BaseBdev2", 00:17:57.677 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:57.677 "is_configured": true, 00:17:57.677 "data_offset": 2048, 00:17:57.677 "data_size": 63488 00:17:57.677 }, 00:17:57.677 { 00:17:57.677 "name": "BaseBdev3", 00:17:57.677 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:57.677 "is_configured": true, 00:17:57.677 "data_offset": 2048, 00:17:57.677 "data_size": 63488 00:17:57.677 }, 00:17:57.677 { 00:17:57.677 "name": "BaseBdev4", 00:17:57.677 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:57.677 "is_configured": true, 00:17:57.677 "data_offset": 2048, 00:17:57.677 "data_size": 63488 00:17:57.677 } 00:17:57.677 ] 00:17:57.677 }' 00:17:57.677 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.677 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.677 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.677 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.677 19:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.061 "name": "raid_bdev1", 00:17:59.061 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:59.061 "strip_size_kb": 64, 00:17:59.061 "state": "online", 00:17:59.061 "raid_level": "raid5f", 00:17:59.061 "superblock": true, 00:17:59.061 "num_base_bdevs": 4, 00:17:59.061 "num_base_bdevs_discovered": 4, 00:17:59.061 "num_base_bdevs_operational": 4, 00:17:59.061 "process": { 00:17:59.061 "type": "rebuild", 00:17:59.061 "target": "spare", 00:17:59.061 "progress": { 00:17:59.061 "blocks": 174720, 00:17:59.061 "percent": 91 00:17:59.061 } 00:17:59.061 }, 00:17:59.061 "base_bdevs_list": [ 00:17:59.061 { 00:17:59.061 "name": "spare", 00:17:59.061 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:59.061 "is_configured": true, 00:17:59.061 "data_offset": 2048, 00:17:59.061 "data_size": 63488 00:17:59.061 }, 00:17:59.061 { 00:17:59.061 "name": "BaseBdev2", 00:17:59.061 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:59.061 
"is_configured": true, 00:17:59.061 "data_offset": 2048, 00:17:59.061 "data_size": 63488 00:17:59.061 }, 00:17:59.061 { 00:17:59.061 "name": "BaseBdev3", 00:17:59.061 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:59.061 "is_configured": true, 00:17:59.061 "data_offset": 2048, 00:17:59.061 "data_size": 63488 00:17:59.061 }, 00:17:59.061 { 00:17:59.061 "name": "BaseBdev4", 00:17:59.061 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:59.061 "is_configured": true, 00:17:59.061 "data_offset": 2048, 00:17:59.061 "data_size": 63488 00:17:59.061 } 00:17:59.061 ] 00:17:59.061 }' 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.061 19:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.632 [2024-11-27 19:16:09.104795] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:59.632 [2024-11-27 19:16:09.104860] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:59.632 [2024-11-27 19:16:09.104991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.890 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.890 "name": "raid_bdev1", 00:17:59.890 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:17:59.890 "strip_size_kb": 64, 00:17:59.891 "state": "online", 00:17:59.891 "raid_level": "raid5f", 00:17:59.891 "superblock": true, 00:17:59.891 "num_base_bdevs": 4, 00:17:59.891 "num_base_bdevs_discovered": 4, 00:17:59.891 "num_base_bdevs_operational": 4, 00:17:59.891 "base_bdevs_list": [ 00:17:59.891 { 00:17:59.891 "name": "spare", 00:17:59.891 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:17:59.891 "is_configured": true, 00:17:59.891 "data_offset": 2048, 00:17:59.891 "data_size": 63488 00:17:59.891 }, 00:17:59.891 { 00:17:59.891 "name": "BaseBdev2", 00:17:59.891 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:17:59.891 "is_configured": true, 00:17:59.891 "data_offset": 2048, 00:17:59.891 "data_size": 63488 00:17:59.891 }, 00:17:59.891 { 00:17:59.891 "name": "BaseBdev3", 00:17:59.891 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:17:59.891 "is_configured": true, 00:17:59.891 "data_offset": 2048, 00:17:59.891 "data_size": 63488 00:17:59.891 }, 00:17:59.891 { 00:17:59.891 "name": 
"BaseBdev4", 00:17:59.891 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:17:59.891 "is_configured": true, 00:17:59.891 "data_offset": 2048, 00:17:59.891 "data_size": 63488 00:17:59.891 } 00:17:59.891 ] 00:17:59.891 }' 00:17:59.891 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.150 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:00.151 "name": "raid_bdev1", 00:18:00.151 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:00.151 "strip_size_kb": 64, 00:18:00.151 "state": "online", 00:18:00.151 "raid_level": "raid5f", 00:18:00.151 "superblock": true, 00:18:00.151 "num_base_bdevs": 4, 00:18:00.151 "num_base_bdevs_discovered": 4, 00:18:00.151 "num_base_bdevs_operational": 4, 00:18:00.151 "base_bdevs_list": [ 00:18:00.151 { 00:18:00.151 "name": "spare", 00:18:00.151 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev2", 00:18:00.151 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev3", 00:18:00.151 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev4", 00:18:00.151 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 } 00:18:00.151 ] 00:18:00.151 }' 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.151 "name": "raid_bdev1", 00:18:00.151 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:00.151 "strip_size_kb": 64, 00:18:00.151 "state": "online", 00:18:00.151 "raid_level": "raid5f", 00:18:00.151 "superblock": true, 00:18:00.151 "num_base_bdevs": 4, 00:18:00.151 "num_base_bdevs_discovered": 4, 00:18:00.151 "num_base_bdevs_operational": 4, 00:18:00.151 "base_bdevs_list": [ 00:18:00.151 { 
00:18:00.151 "name": "spare", 00:18:00.151 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev2", 00:18:00.151 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev3", 00:18:00.151 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 }, 00:18:00.151 { 00:18:00.151 "name": "BaseBdev4", 00:18:00.151 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:00.151 "is_configured": true, 00:18:00.151 "data_offset": 2048, 00:18:00.151 "data_size": 63488 00:18:00.151 } 00:18:00.151 ] 00:18:00.151 }' 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.151 19:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.722 [2024-11-27 19:16:10.195928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.722 [2024-11-27 19:16:10.195961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.722 [2024-11-27 19:16:10.196034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.722 [2024-11-27 19:16:10.196126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.722 [2024-11-27 
19:16:10.196151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:00.722 19:16:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:00.722 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:00.983 /dev/nbd0 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:00.983 1+0 records in 00:18:00.983 1+0 records out 00:18:00.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405625 s, 10.1 MB/s 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:00.983 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:01.244 /dev/nbd1 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.244 1+0 records in 00:18:01.244 
1+0 records out 00:18:01.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568506 s, 7.2 MB/s 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.244 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.504 19:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:01.504 
19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:01.504 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.765 [2024-11-27 19:16:11.363783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.765 [2024-11-27 19:16:11.363836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.765 [2024-11-27 19:16:11.363857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:01.765 [2024-11-27 19:16:11.363865] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.765 [2024-11-27 19:16:11.366245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.765 [2024-11-27 19:16:11.366281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.765 [2024-11-27 19:16:11.366365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:01.765 [2024-11-27 19:16:11.366419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.765 [2024-11-27 19:16:11.366541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.765 [2024-11-27 19:16:11.366625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.765 [2024-11-27 19:16:11.366719] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.765 spare 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.765 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.026 [2024-11-27 19:16:11.466610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:02.026 [2024-11-27 19:16:11.466638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:02.026 [2024-11-27 19:16:11.466895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:02.026 [2024-11-27 19:16:11.473503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:02.026 [2024-11-27 19:16:11.473525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:02.026 [2024-11-27 19:16:11.473682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.026 "name": "raid_bdev1", 00:18:02.026 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:02.026 "strip_size_kb": 64, 00:18:02.026 "state": "online", 00:18:02.026 "raid_level": "raid5f", 00:18:02.026 "superblock": true, 00:18:02.026 "num_base_bdevs": 4, 00:18:02.026 "num_base_bdevs_discovered": 4, 00:18:02.026 "num_base_bdevs_operational": 4, 00:18:02.026 "base_bdevs_list": [ 00:18:02.026 { 00:18:02.026 "name": "spare", 00:18:02.026 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:18:02.026 "is_configured": true, 00:18:02.026 "data_offset": 2048, 00:18:02.026 "data_size": 63488 00:18:02.026 }, 00:18:02.026 { 00:18:02.026 "name": "BaseBdev2", 00:18:02.026 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:02.026 "is_configured": true, 00:18:02.026 "data_offset": 
2048, 00:18:02.026 "data_size": 63488 00:18:02.026 }, 00:18:02.026 { 00:18:02.026 "name": "BaseBdev3", 00:18:02.026 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:02.026 "is_configured": true, 00:18:02.026 "data_offset": 2048, 00:18:02.026 "data_size": 63488 00:18:02.026 }, 00:18:02.026 { 00:18:02.026 "name": "BaseBdev4", 00:18:02.026 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:02.026 "is_configured": true, 00:18:02.026 "data_offset": 2048, 00:18:02.026 "data_size": 63488 00:18:02.026 } 00:18:02.026 ] 00:18:02.026 }' 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.026 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.598 "name": 
"raid_bdev1", 00:18:02.598 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:02.598 "strip_size_kb": 64, 00:18:02.598 "state": "online", 00:18:02.598 "raid_level": "raid5f", 00:18:02.598 "superblock": true, 00:18:02.598 "num_base_bdevs": 4, 00:18:02.598 "num_base_bdevs_discovered": 4, 00:18:02.598 "num_base_bdevs_operational": 4, 00:18:02.598 "base_bdevs_list": [ 00:18:02.598 { 00:18:02.598 "name": "spare", 00:18:02.598 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:18:02.598 "is_configured": true, 00:18:02.598 "data_offset": 2048, 00:18:02.598 "data_size": 63488 00:18:02.598 }, 00:18:02.598 { 00:18:02.598 "name": "BaseBdev2", 00:18:02.598 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:02.598 "is_configured": true, 00:18:02.598 "data_offset": 2048, 00:18:02.598 "data_size": 63488 00:18:02.598 }, 00:18:02.598 { 00:18:02.598 "name": "BaseBdev3", 00:18:02.598 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:02.598 "is_configured": true, 00:18:02.598 "data_offset": 2048, 00:18:02.598 "data_size": 63488 00:18:02.598 }, 00:18:02.598 { 00:18:02.598 "name": "BaseBdev4", 00:18:02.598 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:02.598 "is_configured": true, 00:18:02.598 "data_offset": 2048, 00:18:02.598 "data_size": 63488 00:18:02.598 } 00:18:02.598 ] 00:18:02.598 }' 00:18:02.598 19:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.598 [2024-11-27 19:16:12.144683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.598 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.598 "name": "raid_bdev1", 00:18:02.598 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:02.598 "strip_size_kb": 64, 00:18:02.598 "state": "online", 00:18:02.598 "raid_level": "raid5f", 00:18:02.598 "superblock": true, 00:18:02.598 "num_base_bdevs": 4, 00:18:02.598 "num_base_bdevs_discovered": 3, 00:18:02.598 "num_base_bdevs_operational": 3, 00:18:02.598 "base_bdevs_list": [ 00:18:02.598 { 00:18:02.598 "name": null, 00:18:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.598 "is_configured": false, 00:18:02.598 "data_offset": 0, 00:18:02.598 "data_size": 63488 00:18:02.598 }, 00:18:02.598 { 00:18:02.598 "name": "BaseBdev2", 00:18:02.598 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:02.598 "is_configured": true, 00:18:02.598 "data_offset": 2048, 00:18:02.598 "data_size": 63488 00:18:02.598 }, 00:18:02.598 { 00:18:02.598 "name": "BaseBdev3", 00:18:02.598 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:02.598 "is_configured": true, 00:18:02.599 "data_offset": 2048, 00:18:02.599 "data_size": 63488 00:18:02.599 }, 00:18:02.599 { 00:18:02.599 "name": "BaseBdev4", 00:18:02.599 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:02.599 "is_configured": true, 00:18:02.599 "data_offset": 
2048, 00:18:02.599 "data_size": 63488 00:18:02.599 } 00:18:02.599 ] 00:18:02.599 }' 00:18:02.599 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.599 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.169 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.169 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.169 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.169 [2024-11-27 19:16:12.623868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.169 [2024-11-27 19:16:12.624000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.169 [2024-11-27 19:16:12.624019] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:03.169 [2024-11-27 19:16:12.624051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.169 [2024-11-27 19:16:12.638203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:03.169 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.169 19:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:03.169 [2024-11-27 19:16:12.646942] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.108 "name": "raid_bdev1", 00:18:04.108 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:04.108 "strip_size_kb": 64, 00:18:04.108 "state": "online", 00:18:04.108 
"raid_level": "raid5f", 00:18:04.108 "superblock": true, 00:18:04.108 "num_base_bdevs": 4, 00:18:04.108 "num_base_bdevs_discovered": 4, 00:18:04.108 "num_base_bdevs_operational": 4, 00:18:04.108 "process": { 00:18:04.108 "type": "rebuild", 00:18:04.108 "target": "spare", 00:18:04.108 "progress": { 00:18:04.108 "blocks": 19200, 00:18:04.108 "percent": 10 00:18:04.108 } 00:18:04.108 }, 00:18:04.108 "base_bdevs_list": [ 00:18:04.108 { 00:18:04.108 "name": "spare", 00:18:04.108 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:18:04.108 "is_configured": true, 00:18:04.108 "data_offset": 2048, 00:18:04.108 "data_size": 63488 00:18:04.108 }, 00:18:04.108 { 00:18:04.108 "name": "BaseBdev2", 00:18:04.108 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:04.108 "is_configured": true, 00:18:04.108 "data_offset": 2048, 00:18:04.108 "data_size": 63488 00:18:04.108 }, 00:18:04.108 { 00:18:04.108 "name": "BaseBdev3", 00:18:04.108 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:04.108 "is_configured": true, 00:18:04.108 "data_offset": 2048, 00:18:04.108 "data_size": 63488 00:18:04.108 }, 00:18:04.108 { 00:18:04.108 "name": "BaseBdev4", 00:18:04.108 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:04.108 "is_configured": true, 00:18:04.108 "data_offset": 2048, 00:18:04.108 "data_size": 63488 00:18:04.108 } 00:18:04.108 ] 00:18:04.108 }' 00:18:04.108 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.368 [2024-11-27 19:16:13.777709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.368 [2024-11-27 19:16:13.852414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.368 [2024-11-27 19:16:13.852524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.368 [2024-11-27 19:16:13.852542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.368 [2024-11-27 19:16:13.852551] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.368 "name": "raid_bdev1", 00:18:04.368 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:04.368 "strip_size_kb": 64, 00:18:04.368 "state": "online", 00:18:04.368 "raid_level": "raid5f", 00:18:04.368 "superblock": true, 00:18:04.368 "num_base_bdevs": 4, 00:18:04.368 "num_base_bdevs_discovered": 3, 00:18:04.368 "num_base_bdevs_operational": 3, 00:18:04.368 "base_bdevs_list": [ 00:18:04.368 { 00:18:04.368 "name": null, 00:18:04.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.368 "is_configured": false, 00:18:04.368 "data_offset": 0, 00:18:04.368 "data_size": 63488 00:18:04.368 }, 00:18:04.368 { 00:18:04.368 "name": "BaseBdev2", 00:18:04.368 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:04.368 "is_configured": true, 00:18:04.368 "data_offset": 2048, 00:18:04.368 "data_size": 63488 00:18:04.368 }, 00:18:04.368 { 00:18:04.368 "name": "BaseBdev3", 00:18:04.368 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:04.368 "is_configured": true, 00:18:04.368 "data_offset": 2048, 00:18:04.368 "data_size": 63488 00:18:04.368 }, 00:18:04.368 { 00:18:04.368 "name": "BaseBdev4", 00:18:04.368 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:04.368 "is_configured": true, 00:18:04.368 "data_offset": 2048, 00:18:04.368 "data_size": 63488 00:18:04.368 } 00:18:04.368 ] 00:18:04.368 
}' 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.368 19:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 19:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:04.939 19:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.939 19:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.939 [2024-11-27 19:16:14.320416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:04.939 [2024-11-27 19:16:14.320517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.939 [2024-11-27 19:16:14.320560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:04.939 [2024-11-27 19:16:14.320592] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.939 [2024-11-27 19:16:14.321091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.939 [2024-11-27 19:16:14.321161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:04.939 [2024-11-27 19:16:14.321269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:04.939 [2024-11-27 19:16:14.321302] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.939 [2024-11-27 19:16:14.321346] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:04.939 [2024-11-27 19:16:14.321388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.939 [2024-11-27 19:16:14.335940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:04.939 spare 00:18:04.939 19:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.939 19:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:04.939 [2024-11-27 19:16:14.343863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.881 "name": "raid_bdev1", 00:18:05.881 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:05.881 "strip_size_kb": 64, 00:18:05.881 "state": 
"online", 00:18:05.881 "raid_level": "raid5f", 00:18:05.881 "superblock": true, 00:18:05.881 "num_base_bdevs": 4, 00:18:05.881 "num_base_bdevs_discovered": 4, 00:18:05.881 "num_base_bdevs_operational": 4, 00:18:05.881 "process": { 00:18:05.881 "type": "rebuild", 00:18:05.881 "target": "spare", 00:18:05.881 "progress": { 00:18:05.881 "blocks": 19200, 00:18:05.881 "percent": 10 00:18:05.881 } 00:18:05.881 }, 00:18:05.881 "base_bdevs_list": [ 00:18:05.881 { 00:18:05.881 "name": "spare", 00:18:05.881 "uuid": "edcb1069-abed-5ae6-8037-d796586818cb", 00:18:05.881 "is_configured": true, 00:18:05.881 "data_offset": 2048, 00:18:05.881 "data_size": 63488 00:18:05.881 }, 00:18:05.881 { 00:18:05.881 "name": "BaseBdev2", 00:18:05.881 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:05.881 "is_configured": true, 00:18:05.881 "data_offset": 2048, 00:18:05.881 "data_size": 63488 00:18:05.881 }, 00:18:05.881 { 00:18:05.881 "name": "BaseBdev3", 00:18:05.881 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:05.881 "is_configured": true, 00:18:05.881 "data_offset": 2048, 00:18:05.881 "data_size": 63488 00:18:05.881 }, 00:18:05.881 { 00:18:05.881 "name": "BaseBdev4", 00:18:05.881 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:05.881 "is_configured": true, 00:18:05.881 "data_offset": 2048, 00:18:05.881 "data_size": 63488 00:18:05.881 } 00:18:05.881 ] 00:18:05.881 }' 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.881 19:16:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.881 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.881 [2024-11-27 19:16:15.478567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.141 [2024-11-27 19:16:15.549330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.141 [2024-11-27 19:16:15.549432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.141 [2024-11-27 19:16:15.549453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.141 [2024-11-27 19:16:15.549461] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:06.141 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.141 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:06.141 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.141 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.142 19:16:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.142 "name": "raid_bdev1", 00:18:06.142 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:06.142 "strip_size_kb": 64, 00:18:06.142 "state": "online", 00:18:06.142 "raid_level": "raid5f", 00:18:06.142 "superblock": true, 00:18:06.142 "num_base_bdevs": 4, 00:18:06.142 "num_base_bdevs_discovered": 3, 00:18:06.142 "num_base_bdevs_operational": 3, 00:18:06.142 "base_bdevs_list": [ 00:18:06.142 { 00:18:06.142 "name": null, 00:18:06.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.142 "is_configured": false, 00:18:06.142 "data_offset": 0, 00:18:06.142 "data_size": 63488 00:18:06.142 }, 00:18:06.142 { 00:18:06.142 "name": "BaseBdev2", 00:18:06.142 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:06.142 "is_configured": true, 00:18:06.142 "data_offset": 2048, 00:18:06.142 "data_size": 63488 00:18:06.142 }, 00:18:06.142 { 00:18:06.142 "name": "BaseBdev3", 00:18:06.142 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:06.142 "is_configured": true, 00:18:06.142 "data_offset": 2048, 00:18:06.142 "data_size": 63488 00:18:06.142 }, 00:18:06.142 { 00:18:06.142 "name": "BaseBdev4", 00:18:06.142 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:06.142 "is_configured": true, 00:18:06.142 "data_offset": 2048, 00:18:06.142 
"data_size": 63488 00:18:06.142 } 00:18:06.142 ] 00:18:06.142 }' 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.142 19:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.712 "name": "raid_bdev1", 00:18:06.712 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:06.712 "strip_size_kb": 64, 00:18:06.712 "state": "online", 00:18:06.712 "raid_level": "raid5f", 00:18:06.712 "superblock": true, 00:18:06.712 "num_base_bdevs": 4, 00:18:06.712 "num_base_bdevs_discovered": 3, 00:18:06.712 "num_base_bdevs_operational": 3, 00:18:06.712 "base_bdevs_list": [ 00:18:06.712 { 00:18:06.712 "name": null, 00:18:06.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.712 
"is_configured": false, 00:18:06.712 "data_offset": 0, 00:18:06.712 "data_size": 63488 00:18:06.712 }, 00:18:06.712 { 00:18:06.712 "name": "BaseBdev2", 00:18:06.712 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:06.712 "is_configured": true, 00:18:06.712 "data_offset": 2048, 00:18:06.712 "data_size": 63488 00:18:06.712 }, 00:18:06.712 { 00:18:06.712 "name": "BaseBdev3", 00:18:06.712 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:06.712 "is_configured": true, 00:18:06.712 "data_offset": 2048, 00:18:06.712 "data_size": 63488 00:18:06.712 }, 00:18:06.712 { 00:18:06.712 "name": "BaseBdev4", 00:18:06.712 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:06.712 "is_configured": true, 00:18:06.712 "data_offset": 2048, 00:18:06.712 "data_size": 63488 00:18:06.712 } 00:18:06.712 ] 00:18:06.712 }' 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.712 19:16:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.712 [2024-11-27 19:16:16.224827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.712 [2024-11-27 19:16:16.224919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.712 [2024-11-27 19:16:16.224974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:06.712 [2024-11-27 19:16:16.225003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.712 [2024-11-27 19:16:16.225463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.712 [2024-11-27 19:16:16.225523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.712 [2024-11-27 19:16:16.225625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:06.712 [2024-11-27 19:16:16.225665] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:06.712 [2024-11-27 19:16:16.225737] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:06.712 [2024-11-27 19:16:16.225769] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:06.712 BaseBdev1 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.712 19:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.652 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.912 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.912 "name": "raid_bdev1", 00:18:07.912 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:07.912 "strip_size_kb": 64, 00:18:07.912 "state": "online", 00:18:07.912 "raid_level": "raid5f", 00:18:07.912 "superblock": true, 00:18:07.912 "num_base_bdevs": 4, 00:18:07.912 "num_base_bdevs_discovered": 3, 00:18:07.912 "num_base_bdevs_operational": 3, 00:18:07.912 "base_bdevs_list": [ 00:18:07.912 { 00:18:07.912 "name": null, 00:18:07.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.912 "is_configured": false, 00:18:07.912 
"data_offset": 0, 00:18:07.912 "data_size": 63488 00:18:07.912 }, 00:18:07.912 { 00:18:07.912 "name": "BaseBdev2", 00:18:07.912 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:07.912 "is_configured": true, 00:18:07.912 "data_offset": 2048, 00:18:07.912 "data_size": 63488 00:18:07.912 }, 00:18:07.912 { 00:18:07.912 "name": "BaseBdev3", 00:18:07.912 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:07.912 "is_configured": true, 00:18:07.912 "data_offset": 2048, 00:18:07.912 "data_size": 63488 00:18:07.912 }, 00:18:07.912 { 00:18:07.912 "name": "BaseBdev4", 00:18:07.912 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:07.912 "is_configured": true, 00:18:07.912 "data_offset": 2048, 00:18:07.912 "data_size": 63488 00:18:07.912 } 00:18:07.912 ] 00:18:07.912 }' 00:18:07.912 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.912 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.172 "name": "raid_bdev1", 00:18:08.172 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:08.172 "strip_size_kb": 64, 00:18:08.172 "state": "online", 00:18:08.172 "raid_level": "raid5f", 00:18:08.172 "superblock": true, 00:18:08.172 "num_base_bdevs": 4, 00:18:08.172 "num_base_bdevs_discovered": 3, 00:18:08.172 "num_base_bdevs_operational": 3, 00:18:08.172 "base_bdevs_list": [ 00:18:08.172 { 00:18:08.172 "name": null, 00:18:08.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.172 "is_configured": false, 00:18:08.172 "data_offset": 0, 00:18:08.172 "data_size": 63488 00:18:08.172 }, 00:18:08.172 { 00:18:08.172 "name": "BaseBdev2", 00:18:08.172 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:08.172 "is_configured": true, 00:18:08.172 "data_offset": 2048, 00:18:08.172 "data_size": 63488 00:18:08.172 }, 00:18:08.172 { 00:18:08.172 "name": "BaseBdev3", 00:18:08.172 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:08.172 "is_configured": true, 00:18:08.172 "data_offset": 2048, 00:18:08.172 "data_size": 63488 00:18:08.172 }, 00:18:08.172 { 00:18:08.172 "name": "BaseBdev4", 00:18:08.172 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:08.172 "is_configured": true, 00:18:08.172 "data_offset": 2048, 00:18:08.172 "data_size": 63488 00:18:08.172 } 00:18:08.172 ] 00:18:08.172 }' 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.172 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.432 
19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.432 [2024-11-27 19:16:17.850066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.432 [2024-11-27 19:16:17.850238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.432 [2024-11-27 19:16:17.850255] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.432 request: 00:18:08.432 { 00:18:08.432 "base_bdev": "BaseBdev1", 00:18:08.432 "raid_bdev": "raid_bdev1", 00:18:08.432 "method": "bdev_raid_add_base_bdev", 00:18:08.432 "req_id": 1 00:18:08.432 } 00:18:08.432 Got JSON-RPC error response 00:18:08.432 response: 00:18:08.432 { 00:18:08.432 "code": -22, 00:18:08.432 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:08.432 } 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.432 19:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.372 "name": "raid_bdev1", 00:18:09.372 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:09.372 "strip_size_kb": 64, 00:18:09.372 "state": "online", 00:18:09.372 "raid_level": "raid5f", 00:18:09.372 "superblock": true, 00:18:09.372 "num_base_bdevs": 4, 00:18:09.372 "num_base_bdevs_discovered": 3, 00:18:09.372 "num_base_bdevs_operational": 3, 00:18:09.372 "base_bdevs_list": [ 00:18:09.372 { 00:18:09.372 "name": null, 00:18:09.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.372 "is_configured": false, 00:18:09.372 "data_offset": 0, 00:18:09.372 "data_size": 63488 00:18:09.372 }, 00:18:09.372 { 00:18:09.372 "name": "BaseBdev2", 00:18:09.372 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:09.372 "is_configured": true, 00:18:09.372 "data_offset": 2048, 00:18:09.372 "data_size": 63488 00:18:09.372 }, 00:18:09.372 { 00:18:09.372 "name": "BaseBdev3", 00:18:09.372 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:09.372 "is_configured": true, 00:18:09.372 "data_offset": 2048, 00:18:09.372 "data_size": 63488 00:18:09.372 }, 00:18:09.372 { 00:18:09.372 "name": "BaseBdev4", 00:18:09.372 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:09.372 "is_configured": true, 00:18:09.372 "data_offset": 2048, 00:18:09.372 "data_size": 63488 00:18:09.372 } 00:18:09.372 ] 00:18:09.372 }' 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.372 19:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.942 "name": "raid_bdev1", 00:18:09.942 "uuid": "38189b02-96f1-40a0-b253-619f934f552b", 00:18:09.942 "strip_size_kb": 64, 00:18:09.942 "state": "online", 00:18:09.942 "raid_level": "raid5f", 00:18:09.942 "superblock": true, 00:18:09.942 "num_base_bdevs": 4, 00:18:09.942 "num_base_bdevs_discovered": 3, 00:18:09.942 "num_base_bdevs_operational": 3, 00:18:09.942 "base_bdevs_list": [ 00:18:09.942 { 00:18:09.942 "name": null, 00:18:09.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.942 "is_configured": false, 00:18:09.942 "data_offset": 0, 00:18:09.942 "data_size": 63488 00:18:09.942 }, 00:18:09.942 { 00:18:09.942 "name": "BaseBdev2", 00:18:09.942 "uuid": "e1cb6cff-b725-5d21-b3c1-b8395aa2426f", 00:18:09.942 "is_configured": true, 
00:18:09.942 "data_offset": 2048, 00:18:09.942 "data_size": 63488 00:18:09.942 }, 00:18:09.942 { 00:18:09.942 "name": "BaseBdev3", 00:18:09.942 "uuid": "4b49db90-0e2b-5bc7-9f0d-312481031702", 00:18:09.942 "is_configured": true, 00:18:09.942 "data_offset": 2048, 00:18:09.942 "data_size": 63488 00:18:09.942 }, 00:18:09.942 { 00:18:09.942 "name": "BaseBdev4", 00:18:09.942 "uuid": "a3d2a363-c4c3-59ad-954c-37d9fe5b77f7", 00:18:09.942 "is_configured": true, 00:18:09.942 "data_offset": 2048, 00:18:09.942 "data_size": 63488 00:18:09.942 } 00:18:09.942 ] 00:18:09.942 }' 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85186 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85186 ']' 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85186 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85186 00:18:09.942 killing process with pid 85186 00:18:09.942 Received shutdown signal, test time was about 60.000000 seconds 00:18:09.942 00:18:09.942 Latency(us) 00:18:09.942 [2024-11-27T19:16:19.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.942 [2024-11-27T19:16:19.578Z] 
=================================================================================================================== 00:18:09.942 [2024-11-27T19:16:19.578Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.942 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.943 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.943 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85186' 00:18:09.943 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85186 00:18:09.943 [2024-11-27 19:16:19.545305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.943 [2024-11-27 19:16:19.545405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.943 [2024-11-27 19:16:19.545470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.943 [2024-11-27 19:16:19.545482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:09.943 19:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85186 00:18:10.512 [2024-11-27 19:16:19.999372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.451 19:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:11.451 00:18:11.451 real 0m27.028s 00:18:11.451 user 0m34.010s 00:18:11.451 sys 0m3.073s 00:18:11.451 ************************************ 00:18:11.451 END TEST raid5f_rebuild_test_sb 00:18:11.451 ************************************ 00:18:11.451 19:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.451 19:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.710 19:16:21 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:11.710 19:16:21 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:11.710 19:16:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:11.710 19:16:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.710 19:16:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.710 ************************************ 00:18:11.710 START TEST raid_state_function_test_sb_4k 00:18:11.710 ************************************ 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:11.710 19:16:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:11.710 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:11.711 Process raid pid: 85997 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85997 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85997' 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85997 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85997 ']' 00:18:11.711 19:16:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.711 19:16:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.711 [2024-11-27 19:16:21.221507] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:11.711 [2024-11-27 19:16:21.221626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.970 [2024-11-27 19:16:21.396058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.970 [2024-11-27 19:16:21.497775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.229 [2024-11-27 19:16:21.665861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.229 [2024-11-27 19:16:21.665897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.499 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.499 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:12.499 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:12.499 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 [2024-11-27 19:16:22.047932] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.499 [2024-11-27 19:16:22.047990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.500 [2024-11-27 19:16:22.048000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.500 [2024-11-27 19:16:22.048009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.500 
19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.500 "name": "Existed_Raid", 00:18:12.500 "uuid": "75c97773-d6dc-4643-a261-abb2334db124", 00:18:12.500 "strip_size_kb": 0, 00:18:12.500 "state": "configuring", 00:18:12.500 "raid_level": "raid1", 00:18:12.500 "superblock": true, 00:18:12.500 "num_base_bdevs": 2, 00:18:12.500 "num_base_bdevs_discovered": 0, 00:18:12.500 "num_base_bdevs_operational": 2, 00:18:12.500 "base_bdevs_list": [ 00:18:12.500 { 00:18:12.500 "name": "BaseBdev1", 00:18:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.500 "is_configured": false, 00:18:12.500 "data_offset": 0, 00:18:12.500 "data_size": 0 00:18:12.500 }, 00:18:12.500 { 00:18:12.500 "name": "BaseBdev2", 00:18:12.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.500 "is_configured": false, 00:18:12.500 "data_offset": 0, 00:18:12.500 "data_size": 0 00:18:12.500 } 00:18:12.500 ] 00:18:12.500 }' 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.500 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 [2024-11-27 19:16:22.499133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.068 [2024-11-27 19:16:22.499207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 [2024-11-27 19:16:22.511118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.068 [2024-11-27 19:16:22.511193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.068 [2024-11-27 19:16:22.511218] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.068 [2024-11-27 19:16:22.511241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.068 19:16:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 [2024-11-27 19:16:22.552833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.068 BaseBdev1 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 [ 00:18:13.068 { 00:18:13.068 "name": "BaseBdev1", 00:18:13.068 "aliases": [ 00:18:13.068 
"3832d0ea-b15c-42e8-bd6c-a158a0f825ce" 00:18:13.068 ], 00:18:13.068 "product_name": "Malloc disk", 00:18:13.068 "block_size": 4096, 00:18:13.068 "num_blocks": 8192, 00:18:13.068 "uuid": "3832d0ea-b15c-42e8-bd6c-a158a0f825ce", 00:18:13.068 "assigned_rate_limits": { 00:18:13.068 "rw_ios_per_sec": 0, 00:18:13.068 "rw_mbytes_per_sec": 0, 00:18:13.068 "r_mbytes_per_sec": 0, 00:18:13.068 "w_mbytes_per_sec": 0 00:18:13.068 }, 00:18:13.068 "claimed": true, 00:18:13.068 "claim_type": "exclusive_write", 00:18:13.068 "zoned": false, 00:18:13.068 "supported_io_types": { 00:18:13.068 "read": true, 00:18:13.068 "write": true, 00:18:13.068 "unmap": true, 00:18:13.068 "flush": true, 00:18:13.068 "reset": true, 00:18:13.068 "nvme_admin": false, 00:18:13.068 "nvme_io": false, 00:18:13.068 "nvme_io_md": false, 00:18:13.068 "write_zeroes": true, 00:18:13.068 "zcopy": true, 00:18:13.068 "get_zone_info": false, 00:18:13.068 "zone_management": false, 00:18:13.068 "zone_append": false, 00:18:13.068 "compare": false, 00:18:13.068 "compare_and_write": false, 00:18:13.068 "abort": true, 00:18:13.068 "seek_hole": false, 00:18:13.068 "seek_data": false, 00:18:13.068 "copy": true, 00:18:13.068 "nvme_iov_md": false 00:18:13.068 }, 00:18:13.068 "memory_domains": [ 00:18:13.068 { 00:18:13.068 "dma_device_id": "system", 00:18:13.068 "dma_device_type": 1 00:18:13.068 }, 00:18:13.068 { 00:18:13.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.068 "dma_device_type": 2 00:18:13.068 } 00:18:13.068 ], 00:18:13.068 "driver_specific": {} 00:18:13.068 } 00:18:13.068 ] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.068 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.068 "name": "Existed_Raid", 00:18:13.068 "uuid": "66ab6171-1ac3-489d-9478-76e2903eaa30", 00:18:13.068 "strip_size_kb": 0, 00:18:13.069 "state": "configuring", 00:18:13.069 "raid_level": "raid1", 00:18:13.069 "superblock": true, 00:18:13.069 "num_base_bdevs": 2, 00:18:13.069 
"num_base_bdevs_discovered": 1, 00:18:13.069 "num_base_bdevs_operational": 2, 00:18:13.069 "base_bdevs_list": [ 00:18:13.069 { 00:18:13.069 "name": "BaseBdev1", 00:18:13.069 "uuid": "3832d0ea-b15c-42e8-bd6c-a158a0f825ce", 00:18:13.069 "is_configured": true, 00:18:13.069 "data_offset": 256, 00:18:13.069 "data_size": 7936 00:18:13.069 }, 00:18:13.069 { 00:18:13.069 "name": "BaseBdev2", 00:18:13.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.069 "is_configured": false, 00:18:13.069 "data_offset": 0, 00:18:13.069 "data_size": 0 00:18:13.069 } 00:18:13.069 ] 00:18:13.069 }' 00:18:13.069 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.069 19:16:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.635 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.636 [2024-11-27 19:16:23.024052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.636 [2024-11-27 19:16:23.024087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.636 [2024-11-27 19:16:23.036078] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.636 [2024-11-27 19:16:23.037850] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.636 [2024-11-27 19:16:23.037891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.636 "name": "Existed_Raid", 00:18:13.636 "uuid": "c4bf048f-66fa-4425-8248-8775a542baba", 00:18:13.636 "strip_size_kb": 0, 00:18:13.636 "state": "configuring", 00:18:13.636 "raid_level": "raid1", 00:18:13.636 "superblock": true, 00:18:13.636 "num_base_bdevs": 2, 00:18:13.636 "num_base_bdevs_discovered": 1, 00:18:13.636 "num_base_bdevs_operational": 2, 00:18:13.636 "base_bdevs_list": [ 00:18:13.636 { 00:18:13.636 "name": "BaseBdev1", 00:18:13.636 "uuid": "3832d0ea-b15c-42e8-bd6c-a158a0f825ce", 00:18:13.636 "is_configured": true, 00:18:13.636 "data_offset": 256, 00:18:13.636 "data_size": 7936 00:18:13.636 }, 00:18:13.636 { 00:18:13.636 "name": "BaseBdev2", 00:18:13.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.636 "is_configured": false, 00:18:13.636 "data_offset": 0, 00:18:13.636 "data_size": 0 00:18:13.636 } 00:18:13.636 ] 00:18:13.636 }' 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.636 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.895 19:16:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.895 [2024-11-27 19:16:23.507859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.895 [2024-11-27 19:16:23.508199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:13.895 [2024-11-27 19:16:23.508251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:13.895 [2024-11-27 19:16:23.508516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:13.895 [2024-11-27 19:16:23.508721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:13.895 [2024-11-27 19:16:23.508768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:13.895 BaseBdev2 00:18:13.895 [2024-11-27 19:16:23.508957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:13.895 19:16:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.895 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.155 [ 00:18:14.155 { 00:18:14.155 "name": "BaseBdev2", 00:18:14.155 "aliases": [ 00:18:14.155 "3bfae491-45f2-47a2-b053-32d6486800c8" 00:18:14.155 ], 00:18:14.155 "product_name": "Malloc disk", 00:18:14.155 "block_size": 4096, 00:18:14.155 "num_blocks": 8192, 00:18:14.155 "uuid": "3bfae491-45f2-47a2-b053-32d6486800c8", 00:18:14.155 "assigned_rate_limits": { 00:18:14.155 "rw_ios_per_sec": 0, 00:18:14.155 "rw_mbytes_per_sec": 0, 00:18:14.155 "r_mbytes_per_sec": 0, 00:18:14.155 "w_mbytes_per_sec": 0 00:18:14.155 }, 00:18:14.155 "claimed": true, 00:18:14.155 "claim_type": "exclusive_write", 00:18:14.155 "zoned": false, 00:18:14.155 "supported_io_types": { 00:18:14.155 "read": true, 00:18:14.155 "write": true, 00:18:14.155 "unmap": true, 00:18:14.155 "flush": true, 00:18:14.155 "reset": true, 00:18:14.155 "nvme_admin": false, 00:18:14.155 "nvme_io": false, 00:18:14.155 "nvme_io_md": false, 00:18:14.155 "write_zeroes": true, 00:18:14.155 "zcopy": true, 00:18:14.155 "get_zone_info": false, 00:18:14.155 "zone_management": false, 00:18:14.155 "zone_append": false, 00:18:14.155 "compare": false, 00:18:14.155 "compare_and_write": false, 00:18:14.155 "abort": true, 00:18:14.155 "seek_hole": false, 00:18:14.155 "seek_data": false, 00:18:14.155 "copy": true, 00:18:14.155 "nvme_iov_md": false 
00:18:14.155 }, 00:18:14.155 "memory_domains": [ 00:18:14.155 { 00:18:14.155 "dma_device_id": "system", 00:18:14.155 "dma_device_type": 1 00:18:14.155 }, 00:18:14.155 { 00:18:14.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.155 "dma_device_type": 2 00:18:14.155 } 00:18:14.155 ], 00:18:14.155 "driver_specific": {} 00:18:14.155 } 00:18:14.155 ] 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.155 "name": "Existed_Raid", 00:18:14.155 "uuid": "c4bf048f-66fa-4425-8248-8775a542baba", 00:18:14.155 "strip_size_kb": 0, 00:18:14.155 "state": "online", 00:18:14.155 "raid_level": "raid1", 00:18:14.155 "superblock": true, 00:18:14.155 "num_base_bdevs": 2, 00:18:14.155 "num_base_bdevs_discovered": 2, 00:18:14.155 "num_base_bdevs_operational": 2, 00:18:14.155 "base_bdevs_list": [ 00:18:14.155 { 00:18:14.155 "name": "BaseBdev1", 00:18:14.155 "uuid": "3832d0ea-b15c-42e8-bd6c-a158a0f825ce", 00:18:14.155 "is_configured": true, 00:18:14.155 "data_offset": 256, 00:18:14.155 "data_size": 7936 00:18:14.155 }, 00:18:14.155 { 00:18:14.155 "name": "BaseBdev2", 00:18:14.155 "uuid": "3bfae491-45f2-47a2-b053-32d6486800c8", 00:18:14.155 "is_configured": true, 00:18:14.155 "data_offset": 256, 00:18:14.155 "data_size": 7936 00:18:14.155 } 00:18:14.155 ] 00:18:14.155 }' 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.155 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:14.415 19:16:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.415 19:16:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.415 [2024-11-27 19:16:23.983554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.415 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.415 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.415 "name": "Existed_Raid", 00:18:14.415 "aliases": [ 00:18:14.415 "c4bf048f-66fa-4425-8248-8775a542baba" 00:18:14.415 ], 00:18:14.415 "product_name": "Raid Volume", 00:18:14.415 "block_size": 4096, 00:18:14.415 "num_blocks": 7936, 00:18:14.415 "uuid": "c4bf048f-66fa-4425-8248-8775a542baba", 00:18:14.415 "assigned_rate_limits": { 00:18:14.415 "rw_ios_per_sec": 0, 00:18:14.415 "rw_mbytes_per_sec": 0, 00:18:14.415 "r_mbytes_per_sec": 0, 00:18:14.415 "w_mbytes_per_sec": 0 00:18:14.415 }, 00:18:14.415 "claimed": false, 00:18:14.415 "zoned": false, 00:18:14.415 "supported_io_types": { 00:18:14.415 "read": true, 
00:18:14.415 "write": true, 00:18:14.415 "unmap": false, 00:18:14.415 "flush": false, 00:18:14.415 "reset": true, 00:18:14.415 "nvme_admin": false, 00:18:14.415 "nvme_io": false, 00:18:14.415 "nvme_io_md": false, 00:18:14.415 "write_zeroes": true, 00:18:14.415 "zcopy": false, 00:18:14.415 "get_zone_info": false, 00:18:14.415 "zone_management": false, 00:18:14.415 "zone_append": false, 00:18:14.415 "compare": false, 00:18:14.415 "compare_and_write": false, 00:18:14.415 "abort": false, 00:18:14.415 "seek_hole": false, 00:18:14.415 "seek_data": false, 00:18:14.415 "copy": false, 00:18:14.415 "nvme_iov_md": false 00:18:14.415 }, 00:18:14.415 "memory_domains": [ 00:18:14.415 { 00:18:14.415 "dma_device_id": "system", 00:18:14.415 "dma_device_type": 1 00:18:14.415 }, 00:18:14.415 { 00:18:14.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.415 "dma_device_type": 2 00:18:14.415 }, 00:18:14.415 { 00:18:14.415 "dma_device_id": "system", 00:18:14.415 "dma_device_type": 1 00:18:14.415 }, 00:18:14.415 { 00:18:14.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.415 "dma_device_type": 2 00:18:14.415 } 00:18:14.415 ], 00:18:14.415 "driver_specific": { 00:18:14.415 "raid": { 00:18:14.415 "uuid": "c4bf048f-66fa-4425-8248-8775a542baba", 00:18:14.415 "strip_size_kb": 0, 00:18:14.415 "state": "online", 00:18:14.415 "raid_level": "raid1", 00:18:14.415 "superblock": true, 00:18:14.415 "num_base_bdevs": 2, 00:18:14.415 "num_base_bdevs_discovered": 2, 00:18:14.415 "num_base_bdevs_operational": 2, 00:18:14.415 "base_bdevs_list": [ 00:18:14.415 { 00:18:14.415 "name": "BaseBdev1", 00:18:14.415 "uuid": "3832d0ea-b15c-42e8-bd6c-a158a0f825ce", 00:18:14.415 "is_configured": true, 00:18:14.415 "data_offset": 256, 00:18:14.415 "data_size": 7936 00:18:14.415 }, 00:18:14.415 { 00:18:14.415 "name": "BaseBdev2", 00:18:14.415 "uuid": "3bfae491-45f2-47a2-b053-32d6486800c8", 00:18:14.415 "is_configured": true, 00:18:14.415 "data_offset": 256, 00:18:14.415 "data_size": 7936 00:18:14.415 } 
00:18:14.415 ] 00:18:14.415 } 00:18:14.415 } 00:18:14.415 }' 00:18:14.415 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:14.674 BaseBdev2' 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.674 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.675 [2024-11-27 19:16:24.214956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:14.675 19:16:24 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.675 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.934 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.935 "name": "Existed_Raid", 00:18:14.935 "uuid": "c4bf048f-66fa-4425-8248-8775a542baba", 00:18:14.935 "strip_size_kb": 0, 00:18:14.935 "state": "online", 00:18:14.935 "raid_level": "raid1", 00:18:14.935 "superblock": true, 00:18:14.935 
"num_base_bdevs": 2, 00:18:14.935 "num_base_bdevs_discovered": 1, 00:18:14.935 "num_base_bdevs_operational": 1, 00:18:14.935 "base_bdevs_list": [ 00:18:14.935 { 00:18:14.935 "name": null, 00:18:14.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.935 "is_configured": false, 00:18:14.935 "data_offset": 0, 00:18:14.935 "data_size": 7936 00:18:14.935 }, 00:18:14.935 { 00:18:14.935 "name": "BaseBdev2", 00:18:14.935 "uuid": "3bfae491-45f2-47a2-b053-32d6486800c8", 00:18:14.935 "is_configured": true, 00:18:14.935 "data_offset": 256, 00:18:14.935 "data_size": 7936 00:18:14.935 } 00:18:14.935 ] 00:18:14.935 }' 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.935 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.195 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.195 [2024-11-27 19:16:24.796028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.195 [2024-11-27 19:16:24.796169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.455 [2024-11-27 19:16:24.884469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.455 [2024-11-27 19:16:24.884578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.455 [2024-11-27 19:16:24.884616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:15.455 19:16:24 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85997 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85997 ']' 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85997 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85997 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85997' 00:18:15.455 killing process with pid 85997 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85997 00:18:15.455 [2024-11-27 19:16:24.985953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.455 19:16:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85997 00:18:15.455 [2024-11-27 19:16:25.002212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.839 19:16:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:16.839 00:18:16.839 real 0m4.938s 00:18:16.839 user 0m7.080s 00:18:16.839 sys 0m0.911s 00:18:16.839 19:16:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.839 19:16:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.839 ************************************ 00:18:16.839 END TEST raid_state_function_test_sb_4k 00:18:16.839 ************************************ 00:18:16.839 19:16:26 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:16.839 19:16:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:16.839 19:16:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.839 19:16:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.839 ************************************ 00:18:16.839 START TEST raid_superblock_test_4k 00:18:16.839 ************************************ 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:16.839 
19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86244 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86244 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86244 ']' 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.839 19:16:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.839 [2024-11-27 19:16:26.246114] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:18:16.839 [2024-11-27 19:16:26.246350] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86244 ] 00:18:16.839 [2024-11-27 19:16:26.426057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.100 [2024-11-27 19:16:26.533568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.100 [2024-11-27 19:16:26.711649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.100 [2024-11-27 19:16:26.711749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 malloc1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 [2024-11-27 19:16:27.102906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.671 [2024-11-27 19:16:27.103002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.671 [2024-11-27 19:16:27.103054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:17.671 [2024-11-27 19:16:27.103082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.671 [2024-11-27 19:16:27.105085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.671 [2024-11-27 19:16:27.105157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.671 pt1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 malloc2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 [2024-11-27 19:16:27.160068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:17.671 [2024-11-27 19:16:27.160120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.671 [2024-11-27 19:16:27.160144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:17.671 [2024-11-27 19:16:27.160152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.671 [2024-11-27 19:16:27.162148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.671 [2024-11-27 
19:16:27.162184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:17.671 pt2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 [2024-11-27 19:16:27.172089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.671 [2024-11-27 19:16:27.173876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.671 [2024-11-27 19:16:27.174073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:17.671 [2024-11-27 19:16:27.174121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:17.671 [2024-11-27 19:16:27.174381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:17.671 [2024-11-27 19:16:27.174575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:17.671 [2024-11-27 19:16:27.174621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:17.671 [2024-11-27 19:16:27.174821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.671 "name": "raid_bdev1", 00:18:17.671 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:17.671 "strip_size_kb": 0, 00:18:17.671 "state": "online", 00:18:17.671 "raid_level": "raid1", 00:18:17.671 "superblock": true, 00:18:17.671 "num_base_bdevs": 2, 00:18:17.671 
"num_base_bdevs_discovered": 2, 00:18:17.671 "num_base_bdevs_operational": 2, 00:18:17.671 "base_bdevs_list": [ 00:18:17.671 { 00:18:17.671 "name": "pt1", 00:18:17.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.671 "is_configured": true, 00:18:17.671 "data_offset": 256, 00:18:17.671 "data_size": 7936 00:18:17.671 }, 00:18:17.671 { 00:18:17.671 "name": "pt2", 00:18:17.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.671 "is_configured": true, 00:18:17.671 "data_offset": 256, 00:18:17.671 "data_size": 7936 00:18:17.671 } 00:18:17.671 ] 00:18:17.671 }' 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.671 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:18.238 [2024-11-27 19:16:27.683865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.238 "name": "raid_bdev1", 00:18:18.238 "aliases": [ 00:18:18.238 "ead3bc77-c682-424b-bcc1-6da97d0f48bf" 00:18:18.238 ], 00:18:18.238 "product_name": "Raid Volume", 00:18:18.238 "block_size": 4096, 00:18:18.238 "num_blocks": 7936, 00:18:18.238 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:18.238 "assigned_rate_limits": { 00:18:18.238 "rw_ios_per_sec": 0, 00:18:18.238 "rw_mbytes_per_sec": 0, 00:18:18.238 "r_mbytes_per_sec": 0, 00:18:18.238 "w_mbytes_per_sec": 0 00:18:18.238 }, 00:18:18.238 "claimed": false, 00:18:18.238 "zoned": false, 00:18:18.238 "supported_io_types": { 00:18:18.238 "read": true, 00:18:18.238 "write": true, 00:18:18.238 "unmap": false, 00:18:18.238 "flush": false, 00:18:18.238 "reset": true, 00:18:18.238 "nvme_admin": false, 00:18:18.238 "nvme_io": false, 00:18:18.238 "nvme_io_md": false, 00:18:18.238 "write_zeroes": true, 00:18:18.238 "zcopy": false, 00:18:18.238 "get_zone_info": false, 00:18:18.238 "zone_management": false, 00:18:18.238 "zone_append": false, 00:18:18.238 "compare": false, 00:18:18.238 "compare_and_write": false, 00:18:18.238 "abort": false, 00:18:18.238 "seek_hole": false, 00:18:18.238 "seek_data": false, 00:18:18.238 "copy": false, 00:18:18.238 "nvme_iov_md": false 00:18:18.238 }, 00:18:18.238 "memory_domains": [ 00:18:18.238 { 00:18:18.238 "dma_device_id": "system", 00:18:18.238 "dma_device_type": 1 00:18:18.238 }, 00:18:18.238 { 00:18:18.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.238 "dma_device_type": 2 00:18:18.238 }, 00:18:18.238 { 00:18:18.238 "dma_device_id": "system", 00:18:18.238 "dma_device_type": 1 00:18:18.238 }, 00:18:18.238 { 00:18:18.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.238 "dma_device_type": 2 00:18:18.238 } 00:18:18.238 ], 
00:18:18.238 "driver_specific": { 00:18:18.238 "raid": { 00:18:18.238 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:18.238 "strip_size_kb": 0, 00:18:18.238 "state": "online", 00:18:18.238 "raid_level": "raid1", 00:18:18.238 "superblock": true, 00:18:18.238 "num_base_bdevs": 2, 00:18:18.238 "num_base_bdevs_discovered": 2, 00:18:18.238 "num_base_bdevs_operational": 2, 00:18:18.238 "base_bdevs_list": [ 00:18:18.238 { 00:18:18.238 "name": "pt1", 00:18:18.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.238 "is_configured": true, 00:18:18.238 "data_offset": 256, 00:18:18.238 "data_size": 7936 00:18:18.238 }, 00:18:18.238 { 00:18:18.238 "name": "pt2", 00:18:18.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.238 "is_configured": true, 00:18:18.238 "data_offset": 256, 00:18:18.238 "data_size": 7936 00:18:18.238 } 00:18:18.238 ] 00:18:18.238 } 00:18:18.238 } 00:18:18.238 }' 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:18.238 pt2' 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.238 19:16:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.238 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:18.498 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 [2024-11-27 19:16:27.943375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ead3bc77-c682-424b-bcc1-6da97d0f48bf 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z ead3bc77-c682-424b-bcc1-6da97d0f48bf ']' 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 [2024-11-27 19:16:27.987052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.499 [2024-11-27 19:16:27.987110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.499 [2024-11-27 19:16:27.987194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.499 [2024-11-27 19:16:27.987258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.499 [2024-11-27 19:16:27.987291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.499 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.499 [2024-11-27 19:16:28.126834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:18.499 [2024-11-27 19:16:28.128908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:18.499 [2024-11-27 19:16:28.128975] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:18.499 [2024-11-27 19:16:28.129034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:18.499 [2024-11-27 19:16:28.129051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.499 [2024-11-27 19:16:28.129063] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:18.760 request: 00:18:18.760 { 00:18:18.760 "name": "raid_bdev1", 00:18:18.760 "raid_level": "raid1", 00:18:18.760 "base_bdevs": [ 00:18:18.760 "malloc1", 00:18:18.760 "malloc2" 00:18:18.760 ], 00:18:18.760 "superblock": false, 00:18:18.760 "method": "bdev_raid_create", 00:18:18.760 "req_id": 1 00:18:18.760 } 00:18:18.760 Got JSON-RPC error response 00:18:18.760 response: 00:18:18.760 { 00:18:18.760 "code": -17, 00:18:18.760 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:18.760 } 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.760 [2024-11-27 19:16:28.190796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.760 [2024-11-27 19:16:28.190883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.760 [2024-11-27 19:16:28.190916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:18.760 [2024-11-27 19:16:28.190945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.760 [2024-11-27 19:16:28.193002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.760 [2024-11-27 19:16:28.193075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.760 [2024-11-27 19:16:28.193162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:18.760 [2024-11-27 19:16:28.193229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.760 pt1 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.760 "name": "raid_bdev1", 00:18:18.760 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:18.760 "strip_size_kb": 0, 00:18:18.760 "state": "configuring", 00:18:18.760 "raid_level": "raid1", 00:18:18.760 "superblock": true, 00:18:18.760 "num_base_bdevs": 2, 00:18:18.760 "num_base_bdevs_discovered": 1, 00:18:18.760 "num_base_bdevs_operational": 2, 00:18:18.760 "base_bdevs_list": [ 00:18:18.760 { 00:18:18.760 "name": "pt1", 00:18:18.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.760 "is_configured": true, 00:18:18.760 "data_offset": 256, 00:18:18.760 "data_size": 7936 00:18:18.760 }, 00:18:18.760 { 00:18:18.760 "name": null, 00:18:18.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.760 "is_configured": false, 00:18:18.760 "data_offset": 256, 00:18:18.760 "data_size": 7936 00:18:18.760 } 
00:18:18.760 ] 00:18:18.760 }' 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.760 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.021 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:19.021 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:19.021 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.022 [2024-11-27 19:16:28.626068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.022 [2024-11-27 19:16:28.626162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.022 [2024-11-27 19:16:28.626184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:19.022 [2024-11-27 19:16:28.626194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.022 [2024-11-27 19:16:28.626539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.022 [2024-11-27 19:16:28.626557] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.022 [2024-11-27 19:16:28.626609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:19.022 [2024-11-27 19:16:28.626629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.022 [2024-11-27 19:16:28.626750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:19.022 [2024-11-27 19:16:28.626761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:19.022 [2024-11-27 19:16:28.626984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:19.022 [2024-11-27 19:16:28.627126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:19.022 [2024-11-27 19:16:28.627134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:19.022 [2024-11-27 19:16:28.627252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.022 pt2 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.022 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.282 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.282 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.282 "name": "raid_bdev1", 00:18:19.282 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:19.282 "strip_size_kb": 0, 00:18:19.282 "state": "online", 00:18:19.282 "raid_level": "raid1", 00:18:19.282 "superblock": true, 00:18:19.282 "num_base_bdevs": 2, 00:18:19.282 "num_base_bdevs_discovered": 2, 00:18:19.282 "num_base_bdevs_operational": 2, 00:18:19.282 "base_bdevs_list": [ 00:18:19.282 { 00:18:19.282 "name": "pt1", 00:18:19.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.282 "is_configured": true, 00:18:19.282 "data_offset": 256, 00:18:19.282 "data_size": 7936 00:18:19.282 }, 00:18:19.282 { 00:18:19.282 "name": "pt2", 00:18:19.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.282 "is_configured": true, 00:18:19.282 "data_offset": 256, 00:18:19.282 "data_size": 7936 00:18:19.282 } 00:18:19.282 ] 00:18:19.282 }' 00:18:19.282 19:16:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.282 19:16:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.542 [2024-11-27 19:16:29.081469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.542 "name": "raid_bdev1", 00:18:19.542 "aliases": [ 00:18:19.542 "ead3bc77-c682-424b-bcc1-6da97d0f48bf" 00:18:19.542 ], 00:18:19.542 "product_name": "Raid Volume", 00:18:19.542 "block_size": 4096, 00:18:19.542 "num_blocks": 7936, 00:18:19.542 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:19.542 "assigned_rate_limits": { 00:18:19.542 "rw_ios_per_sec": 0, 00:18:19.542 "rw_mbytes_per_sec": 0, 00:18:19.542 "r_mbytes_per_sec": 0, 00:18:19.542 "w_mbytes_per_sec": 0 00:18:19.542 }, 00:18:19.542 "claimed": false, 00:18:19.542 "zoned": false, 00:18:19.542 "supported_io_types": { 00:18:19.542 "read": true, 00:18:19.542 "write": true, 00:18:19.542 "unmap": false, 
00:18:19.542 "flush": false, 00:18:19.542 "reset": true, 00:18:19.542 "nvme_admin": false, 00:18:19.542 "nvme_io": false, 00:18:19.542 "nvme_io_md": false, 00:18:19.542 "write_zeroes": true, 00:18:19.542 "zcopy": false, 00:18:19.542 "get_zone_info": false, 00:18:19.542 "zone_management": false, 00:18:19.542 "zone_append": false, 00:18:19.542 "compare": false, 00:18:19.542 "compare_and_write": false, 00:18:19.542 "abort": false, 00:18:19.542 "seek_hole": false, 00:18:19.542 "seek_data": false, 00:18:19.542 "copy": false, 00:18:19.542 "nvme_iov_md": false 00:18:19.542 }, 00:18:19.542 "memory_domains": [ 00:18:19.542 { 00:18:19.542 "dma_device_id": "system", 00:18:19.542 "dma_device_type": 1 00:18:19.542 }, 00:18:19.542 { 00:18:19.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.542 "dma_device_type": 2 00:18:19.542 }, 00:18:19.542 { 00:18:19.542 "dma_device_id": "system", 00:18:19.542 "dma_device_type": 1 00:18:19.542 }, 00:18:19.542 { 00:18:19.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.542 "dma_device_type": 2 00:18:19.542 } 00:18:19.542 ], 00:18:19.542 "driver_specific": { 00:18:19.542 "raid": { 00:18:19.542 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:19.542 "strip_size_kb": 0, 00:18:19.542 "state": "online", 00:18:19.542 "raid_level": "raid1", 00:18:19.542 "superblock": true, 00:18:19.542 "num_base_bdevs": 2, 00:18:19.542 "num_base_bdevs_discovered": 2, 00:18:19.542 "num_base_bdevs_operational": 2, 00:18:19.542 "base_bdevs_list": [ 00:18:19.542 { 00:18:19.542 "name": "pt1", 00:18:19.542 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.542 "is_configured": true, 00:18:19.542 "data_offset": 256, 00:18:19.542 "data_size": 7936 00:18:19.542 }, 00:18:19.542 { 00:18:19.542 "name": "pt2", 00:18:19.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.542 "is_configured": true, 00:18:19.542 "data_offset": 256, 00:18:19.542 "data_size": 7936 00:18:19.542 } 00:18:19.542 ] 00:18:19.542 } 00:18:19.542 } 00:18:19.542 }' 00:18:19.542 
19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.542 pt2' 00:18:19.542 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.801 
19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.801 [2024-11-27 19:16:29.297090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' ead3bc77-c682-424b-bcc1-6da97d0f48bf '!=' ead3bc77-c682-424b-bcc1-6da97d0f48bf ']' 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.801 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 [2024-11-27 19:16:29.344830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:19.802 
19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.802 "name": "raid_bdev1", 00:18:19.802 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 
00:18:19.802 "strip_size_kb": 0, 00:18:19.802 "state": "online", 00:18:19.802 "raid_level": "raid1", 00:18:19.802 "superblock": true, 00:18:19.802 "num_base_bdevs": 2, 00:18:19.802 "num_base_bdevs_discovered": 1, 00:18:19.802 "num_base_bdevs_operational": 1, 00:18:19.802 "base_bdevs_list": [ 00:18:19.802 { 00:18:19.802 "name": null, 00:18:19.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.802 "is_configured": false, 00:18:19.802 "data_offset": 0, 00:18:19.802 "data_size": 7936 00:18:19.802 }, 00:18:19.802 { 00:18:19.802 "name": "pt2", 00:18:19.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.802 "is_configured": true, 00:18:19.802 "data_offset": 256, 00:18:19.802 "data_size": 7936 00:18:19.802 } 00:18:19.802 ] 00:18:19.802 }' 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.802 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.372 [2024-11-27 19:16:29.835982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.372 [2024-11-27 19:16:29.836044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.372 [2024-11-27 19:16:29.836110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.372 [2024-11-27 19:16:29.836159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.372 [2024-11-27 19:16:29.836192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:20.372 19:16:29 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:20.372 19:16:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.372 [2024-11-27 19:16:29.907904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.372 [2024-11-27 19:16:29.907949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.372 [2024-11-27 19:16:29.907980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:20.372 [2024-11-27 19:16:29.907989] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.372 [2024-11-27 19:16:29.909994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.372 [2024-11-27 19:16:29.910031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.372 [2024-11-27 19:16:29.910091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.372 [2024-11-27 19:16:29.910131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.372 [2024-11-27 19:16:29.910224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:20.372 [2024-11-27 19:16:29.910241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.372 [2024-11-27 19:16:29.910444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:20.372 [2024-11-27 19:16:29.910579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:20.372 [2024-11-27 19:16:29.910587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:18:20.372 [2024-11-27 19:16:29.910730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.372 pt2 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.372 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.373 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.373 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.373 19:16:29 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.373 "name": "raid_bdev1", 00:18:20.373 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:20.373 "strip_size_kb": 0, 00:18:20.373 "state": "online", 00:18:20.373 "raid_level": "raid1", 00:18:20.373 "superblock": true, 00:18:20.373 "num_base_bdevs": 2, 00:18:20.373 "num_base_bdevs_discovered": 1, 00:18:20.373 "num_base_bdevs_operational": 1, 00:18:20.373 "base_bdevs_list": [ 00:18:20.373 { 00:18:20.373 "name": null, 00:18:20.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.373 "is_configured": false, 00:18:20.373 "data_offset": 256, 00:18:20.373 "data_size": 7936 00:18:20.373 }, 00:18:20.373 { 00:18:20.373 "name": "pt2", 00:18:20.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.373 "is_configured": true, 00:18:20.373 "data_offset": 256, 00:18:20.373 "data_size": 7936 00:18:20.373 } 00:18:20.373 ] 00:18:20.373 }' 00:18:20.373 19:16:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.373 19:16:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.942 [2024-11-27 19:16:30.383277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.942 [2024-11-27 19:16:30.383342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.942 [2024-11-27 19:16:30.383401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.942 [2024-11-27 19:16:30.383450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.942 [2024-11-27 19:16:30.383480] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.942 [2024-11-27 19:16:30.443195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.942 [2024-11-27 19:16:30.443274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.942 [2024-11-27 19:16:30.443305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:20.942 [2024-11-27 19:16:30.443334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.942 [2024-11-27 19:16:30.445329] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.942 [2024-11-27 19:16:30.445400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.942 [2024-11-27 19:16:30.445481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:20.942 [2024-11-27 19:16:30.445534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.942 [2024-11-27 19:16:30.445684] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:20.942 [2024-11-27 19:16:30.445748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.942 [2024-11-27 19:16:30.445779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:20.942 [2024-11-27 19:16:30.445874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.942 [2024-11-27 19:16:30.445963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:20.942 [2024-11-27 19:16:30.445997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.942 [2024-11-27 19:16:30.446235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:20.942 [2024-11-27 19:16:30.446403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:20.942 [2024-11-27 19:16:30.446446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:20.942 [2024-11-27 19:16:30.446620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.942 pt1 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.942 "name": "raid_bdev1", 00:18:20.942 "uuid": "ead3bc77-c682-424b-bcc1-6da97d0f48bf", 00:18:20.942 "strip_size_kb": 0, 00:18:20.942 "state": "online", 00:18:20.942 "raid_level": "raid1", 
00:18:20.942 "superblock": true, 00:18:20.942 "num_base_bdevs": 2, 00:18:20.942 "num_base_bdevs_discovered": 1, 00:18:20.942 "num_base_bdevs_operational": 1, 00:18:20.942 "base_bdevs_list": [ 00:18:20.942 { 00:18:20.942 "name": null, 00:18:20.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.942 "is_configured": false, 00:18:20.942 "data_offset": 256, 00:18:20.942 "data_size": 7936 00:18:20.942 }, 00:18:20.942 { 00:18:20.942 "name": "pt2", 00:18:20.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.942 "is_configured": true, 00:18:20.942 "data_offset": 256, 00:18:20.942 "data_size": 7936 00:18:20.942 } 00:18:20.942 ] 00:18:20.942 }' 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.942 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.511 
[2024-11-27 19:16:30.958493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' ead3bc77-c682-424b-bcc1-6da97d0f48bf '!=' ead3bc77-c682-424b-bcc1-6da97d0f48bf ']' 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86244 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86244 ']' 00:18:21.511 19:16:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86244 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86244 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.511 killing process with pid 86244 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86244' 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86244 00:18:21.511 [2024-11-27 19:16:31.041617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.511 [2024-11-27 19:16:31.041674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.511 [2024-11-27 19:16:31.041718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.511 [2024-11-27 19:16:31.041731] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:21.511 19:16:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86244 00:18:21.771 [2024-11-27 19:16:31.234281] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.717 19:16:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:22.717 00:18:22.717 real 0m6.144s 00:18:22.717 user 0m9.344s 00:18:22.717 sys 0m1.181s 00:18:22.717 19:16:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.717 ************************************ 00:18:22.717 END TEST raid_superblock_test_4k 00:18:22.717 ************************************ 00:18:22.717 19:16:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.717 19:16:32 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:18:22.717 19:16:32 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:22.717 19:16:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:23.003 19:16:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.003 19:16:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.003 ************************************ 00:18:23.003 START TEST raid_rebuild_test_sb_4k 00:18:23.003 ************************************ 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:23.003 19:16:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86571 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86571 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86571 ']' 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.003 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.004 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.004 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.004 19:16:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.004 [2024-11-27 19:16:32.462145] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:23.004 [2024-11-27 19:16:32.462296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86571 ] 00:18:23.004 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:23.004 Zero copy mechanism will not be used. 00:18:23.004 [2024-11-27 19:16:32.633619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.276 [2024-11-27 19:16:32.737710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.551 [2024-11-27 19:16:32.936073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.551 [2024-11-27 19:16:32.936202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.828 BaseBdev1_malloc 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.828 [2024-11-27 19:16:33.326200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:23.828 [2024-11-27 19:16:33.326297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.828 [2024-11-27 19:16:33.326352] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:18:23.828 [2024-11-27 19:16:33.326383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.828 [2024-11-27 19:16:33.328365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.828 [2024-11-27 19:16:33.328444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:23.828 BaseBdev1 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.828 BaseBdev2_malloc 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.828 [2024-11-27 19:16:33.374385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:23.828 [2024-11-27 19:16:33.374495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.828 [2024-11-27 19:16:33.374520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:23.828 [2024-11-27 19:16:33.374530] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:23.828 [2024-11-27 19:16:33.376507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.828 [2024-11-27 19:16:33.376548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:23.828 BaseBdev2 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.828 spare_malloc 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:23.828 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.829 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.089 spare_delay 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.089 [2024-11-27 19:16:33.472252] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:24.089 [2024-11-27 19:16:33.472348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.089 [2024-11-27 19:16:33.472386] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:24.089 [2024-11-27 19:16:33.472396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.089 [2024-11-27 19:16:33.474352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.089 [2024-11-27 19:16:33.474429] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:24.089 spare 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.089 [2024-11-27 19:16:33.484293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.089 [2024-11-27 19:16:33.485983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.089 [2024-11-27 19:16:33.486164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:24.089 [2024-11-27 19:16:33.486179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:24.089 [2024-11-27 19:16:33.486402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:24.089 [2024-11-27 19:16:33.486553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:24.089 [2024-11-27 19:16:33.486561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:24.089 [2024-11-27 19:16:33.486691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.089 
19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.089 "name": "raid_bdev1", 00:18:24.089 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 
00:18:24.089 "strip_size_kb": 0, 00:18:24.089 "state": "online", 00:18:24.089 "raid_level": "raid1", 00:18:24.089 "superblock": true, 00:18:24.089 "num_base_bdevs": 2, 00:18:24.089 "num_base_bdevs_discovered": 2, 00:18:24.089 "num_base_bdevs_operational": 2, 00:18:24.089 "base_bdevs_list": [ 00:18:24.089 { 00:18:24.089 "name": "BaseBdev1", 00:18:24.089 "uuid": "704f3ce8-8a73-5f6f-8d1a-6f9416bc34ff", 00:18:24.089 "is_configured": true, 00:18:24.089 "data_offset": 256, 00:18:24.089 "data_size": 7936 00:18:24.089 }, 00:18:24.089 { 00:18:24.089 "name": "BaseBdev2", 00:18:24.089 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:24.089 "is_configured": true, 00:18:24.089 "data_offset": 256, 00:18:24.089 "data_size": 7936 00:18:24.089 } 00:18:24.089 ] 00:18:24.089 }' 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.089 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.349 [2024-11-27 19:16:33.935773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:24.349 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:24.350 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.350 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.611 19:16:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:24.611 [2024-11-27 19:16:34.191070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:18:24.611 /dev/nbd0 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.611 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.871 1+0 records in 00:18:24.871 1+0 records out 00:18:24.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486211 s, 8.4 MB/s 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.871 19:16:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:24.871 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:25.443 7936+0 records in 00:18:25.443 7936+0 records out 00:18:25.443 32505856 bytes (33 MB, 31 MiB) copied, 0.545899 s, 59.5 MB/s 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.443 19:16:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.444 [2024-11-27 19:16:35.027716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.444 [2024-11-27 19:16:35.056248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.444 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.704 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.704 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.704 "name": "raid_bdev1", 00:18:25.704 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:25.704 "strip_size_kb": 0, 00:18:25.704 "state": "online", 00:18:25.704 "raid_level": "raid1", 00:18:25.704 "superblock": true, 00:18:25.704 "num_base_bdevs": 2, 00:18:25.704 "num_base_bdevs_discovered": 1, 00:18:25.704 "num_base_bdevs_operational": 1, 00:18:25.704 "base_bdevs_list": [ 00:18:25.704 { 00:18:25.704 "name": null, 00:18:25.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.704 "is_configured": false, 00:18:25.704 "data_offset": 0, 00:18:25.704 "data_size": 7936 00:18:25.704 }, 00:18:25.704 { 00:18:25.704 "name": "BaseBdev2", 00:18:25.704 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:25.704 "is_configured": true, 00:18:25.704 "data_offset": 256, 00:18:25.704 "data_size": 7936 00:18:25.704 } 00:18:25.704 ] 00:18:25.704 }' 00:18:25.704 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.704 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.965 19:16:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:25.965 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.965 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.965 [2024-11-27 19:16:35.499518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.965 [2024-11-27 19:16:35.515720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:25.965 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.965 19:16:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:25.965 [2024-11-27 19:16:35.517542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.905 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.165 19:16:36 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.165 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.165 "name": "raid_bdev1", 00:18:27.165 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:27.165 "strip_size_kb": 0, 00:18:27.165 "state": "online", 00:18:27.165 "raid_level": "raid1", 00:18:27.165 "superblock": true, 00:18:27.165 "num_base_bdevs": 2, 00:18:27.165 "num_base_bdevs_discovered": 2, 00:18:27.165 "num_base_bdevs_operational": 2, 00:18:27.165 "process": { 00:18:27.165 "type": "rebuild", 00:18:27.165 "target": "spare", 00:18:27.165 "progress": { 00:18:27.165 "blocks": 2560, 00:18:27.165 "percent": 32 00:18:27.165 } 00:18:27.165 }, 00:18:27.165 "base_bdevs_list": [ 00:18:27.165 { 00:18:27.165 "name": "spare", 00:18:27.165 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:27.165 "is_configured": true, 00:18:27.165 "data_offset": 256, 00:18:27.165 "data_size": 7936 00:18:27.165 }, 00:18:27.165 { 00:18:27.165 "name": "BaseBdev2", 00:18:27.165 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:27.165 "is_configured": true, 00:18:27.165 "data_offset": 256, 00:18:27.165 "data_size": 7936 00:18:27.165 } 00:18:27.165 ] 00:18:27.166 }' 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.166 19:16:36 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.166 [2024-11-27 19:16:36.681195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.166 [2024-11-27 19:16:36.722178] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:27.166 [2024-11-27 19:16:36.722236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.166 [2024-11-27 19:16:36.722250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.166 [2024-11-27 19:16:36.722258] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.166 19:16:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.166 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.426 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.426 "name": "raid_bdev1", 00:18:27.426 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:27.426 "strip_size_kb": 0, 00:18:27.426 "state": "online", 00:18:27.426 "raid_level": "raid1", 00:18:27.426 "superblock": true, 00:18:27.426 "num_base_bdevs": 2, 00:18:27.426 "num_base_bdevs_discovered": 1, 00:18:27.426 "num_base_bdevs_operational": 1, 00:18:27.426 "base_bdevs_list": [ 00:18:27.426 { 00:18:27.426 "name": null, 00:18:27.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.426 "is_configured": false, 00:18:27.426 "data_offset": 0, 00:18:27.426 "data_size": 7936 00:18:27.426 }, 00:18:27.426 { 00:18:27.426 "name": "BaseBdev2", 00:18:27.426 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:27.426 "is_configured": true, 00:18:27.426 "data_offset": 256, 00:18:27.426 "data_size": 7936 00:18:27.426 } 00:18:27.426 ] 00:18:27.426 }' 00:18:27.426 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.426 19:16:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.686 19:16:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.686 "name": "raid_bdev1", 00:18:27.686 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:27.686 "strip_size_kb": 0, 00:18:27.686 "state": "online", 00:18:27.686 "raid_level": "raid1", 00:18:27.686 "superblock": true, 00:18:27.686 "num_base_bdevs": 2, 00:18:27.686 "num_base_bdevs_discovered": 1, 00:18:27.686 "num_base_bdevs_operational": 1, 00:18:27.686 "base_bdevs_list": [ 00:18:27.686 { 00:18:27.686 "name": null, 00:18:27.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.686 "is_configured": false, 00:18:27.686 "data_offset": 0, 00:18:27.686 "data_size": 7936 00:18:27.686 }, 00:18:27.686 { 00:18:27.686 "name": "BaseBdev2", 00:18:27.686 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:27.686 "is_configured": true, 00:18:27.686 "data_offset": 256, 00:18:27.686 "data_size": 7936 00:18:27.686 } 00:18:27.686 ] 00:18:27.686 }' 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.686 19:16:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.686 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.946 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.946 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.946 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.946 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.946 [2024-11-27 19:16:37.346228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.946 [2024-11-27 19:16:37.362043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:27.946 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.946 19:16:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:27.946 [2024-11-27 19:16:37.363881] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.886 "name": "raid_bdev1", 00:18:28.886 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:28.886 "strip_size_kb": 0, 00:18:28.886 "state": "online", 00:18:28.886 "raid_level": "raid1", 00:18:28.886 "superblock": true, 00:18:28.886 "num_base_bdevs": 2, 00:18:28.886 "num_base_bdevs_discovered": 2, 00:18:28.886 "num_base_bdevs_operational": 2, 00:18:28.886 "process": { 00:18:28.886 "type": "rebuild", 00:18:28.886 "target": "spare", 00:18:28.886 "progress": { 00:18:28.886 "blocks": 2560, 00:18:28.886 "percent": 32 00:18:28.886 } 00:18:28.886 }, 00:18:28.886 "base_bdevs_list": [ 00:18:28.886 { 00:18:28.886 "name": "spare", 00:18:28.886 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:28.886 "is_configured": true, 00:18:28.886 "data_offset": 256, 00:18:28.886 "data_size": 7936 00:18:28.886 }, 00:18:28.886 { 00:18:28.886 "name": "BaseBdev2", 00:18:28.886 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:28.886 "is_configured": true, 00:18:28.886 "data_offset": 256, 00:18:28.886 "data_size": 7936 00:18:28.886 } 00:18:28.886 ] 00:18:28.886 }' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:28.886 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=680 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.886 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.147 19:16:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.147 "name": "raid_bdev1", 00:18:29.147 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:29.147 "strip_size_kb": 0, 00:18:29.147 "state": "online", 00:18:29.147 "raid_level": "raid1", 00:18:29.147 "superblock": true, 00:18:29.147 "num_base_bdevs": 2, 00:18:29.147 "num_base_bdevs_discovered": 2, 00:18:29.147 "num_base_bdevs_operational": 2, 00:18:29.147 "process": { 00:18:29.147 "type": "rebuild", 00:18:29.147 "target": "spare", 00:18:29.147 "progress": { 00:18:29.147 "blocks": 2816, 00:18:29.147 "percent": 35 00:18:29.147 } 00:18:29.147 }, 00:18:29.147 "base_bdevs_list": [ 00:18:29.147 { 00:18:29.147 "name": "spare", 00:18:29.147 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:29.147 "is_configured": true, 00:18:29.147 "data_offset": 256, 00:18:29.147 "data_size": 7936 00:18:29.147 }, 00:18:29.147 { 00:18:29.147 "name": "BaseBdev2", 00:18:29.147 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:29.147 "is_configured": true, 00:18:29.147 "data_offset": 256, 00:18:29.147 "data_size": 7936 00:18:29.147 } 00:18:29.147 ] 00:18:29.147 }' 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.147 19:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.105 "name": "raid_bdev1", 00:18:30.105 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:30.105 "strip_size_kb": 0, 00:18:30.105 "state": "online", 00:18:30.105 "raid_level": "raid1", 00:18:30.105 "superblock": true, 00:18:30.105 "num_base_bdevs": 2, 00:18:30.105 "num_base_bdevs_discovered": 2, 00:18:30.105 "num_base_bdevs_operational": 2, 00:18:30.105 "process": { 00:18:30.105 "type": "rebuild", 00:18:30.105 "target": "spare", 00:18:30.105 "progress": { 00:18:30.105 "blocks": 5888, 00:18:30.105 "percent": 74 00:18:30.105 } 00:18:30.105 }, 00:18:30.105 "base_bdevs_list": [ 00:18:30.105 { 00:18:30.105 "name": "spare", 00:18:30.105 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:30.105 "is_configured": true, 00:18:30.105 "data_offset": 256, 00:18:30.105 "data_size": 7936 00:18:30.105 
}, 00:18:30.105 { 00:18:30.105 "name": "BaseBdev2", 00:18:30.105 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:30.105 "is_configured": true, 00:18:30.105 "data_offset": 256, 00:18:30.105 "data_size": 7936 00:18:30.105 } 00:18:30.105 ] 00:18:30.105 }' 00:18:30.105 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.366 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.366 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.366 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.366 19:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.935 [2024-11-27 19:16:40.475739] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:30.935 [2024-11-27 19:16:40.475829] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:30.935 [2024-11-27 19:16:40.475927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.196 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.456 "name": "raid_bdev1", 00:18:31.456 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:31.456 "strip_size_kb": 0, 00:18:31.456 "state": "online", 00:18:31.456 "raid_level": "raid1", 00:18:31.456 "superblock": true, 00:18:31.456 "num_base_bdevs": 2, 00:18:31.456 "num_base_bdevs_discovered": 2, 00:18:31.456 "num_base_bdevs_operational": 2, 00:18:31.456 "base_bdevs_list": [ 00:18:31.456 { 00:18:31.456 "name": "spare", 00:18:31.456 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:31.456 "is_configured": true, 00:18:31.456 "data_offset": 256, 00:18:31.456 "data_size": 7936 00:18:31.456 }, 00:18:31.456 { 00:18:31.456 "name": "BaseBdev2", 00:18:31.456 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:31.456 "is_configured": true, 00:18:31.456 "data_offset": 256, 00:18:31.456 "data_size": 7936 00:18:31.456 } 00:18:31.456 ] 00:18:31.456 }' 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.456 19:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.456 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.456 "name": "raid_bdev1", 00:18:31.456 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:31.456 "strip_size_kb": 0, 00:18:31.456 "state": "online", 00:18:31.456 "raid_level": "raid1", 00:18:31.456 "superblock": true, 00:18:31.456 "num_base_bdevs": 2, 00:18:31.456 "num_base_bdevs_discovered": 2, 00:18:31.456 "num_base_bdevs_operational": 2, 00:18:31.456 "base_bdevs_list": [ 00:18:31.456 { 00:18:31.456 "name": "spare", 00:18:31.456 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:31.456 "is_configured": true, 00:18:31.456 "data_offset": 256, 00:18:31.456 "data_size": 7936 00:18:31.456 }, 00:18:31.456 { 00:18:31.456 "name": "BaseBdev2", 00:18:31.456 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:31.456 "is_configured": true, 
00:18:31.456 "data_offset": 256, 00:18:31.456 "data_size": 7936 00:18:31.456 } 00:18:31.456 ] 00:18:31.456 }' 00:18:31.456 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.456 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.456 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.717 19:16:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.717 "name": "raid_bdev1", 00:18:31.717 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:31.717 "strip_size_kb": 0, 00:18:31.717 "state": "online", 00:18:31.717 "raid_level": "raid1", 00:18:31.717 "superblock": true, 00:18:31.717 "num_base_bdevs": 2, 00:18:31.717 "num_base_bdevs_discovered": 2, 00:18:31.717 "num_base_bdevs_operational": 2, 00:18:31.717 "base_bdevs_list": [ 00:18:31.717 { 00:18:31.717 "name": "spare", 00:18:31.717 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:31.717 "is_configured": true, 00:18:31.717 "data_offset": 256, 00:18:31.717 "data_size": 7936 00:18:31.717 }, 00:18:31.717 { 00:18:31.717 "name": "BaseBdev2", 00:18:31.717 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:31.717 "is_configured": true, 00:18:31.717 "data_offset": 256, 00:18:31.717 "data_size": 7936 00:18:31.717 } 00:18:31.717 ] 00:18:31.717 }' 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.717 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.978 [2024-11-27 19:16:41.583950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.978 [2024-11-27 19:16:41.584023] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:31.978 [2024-11-27 19:16:41.584119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.978 [2024-11-27 19:16:41.584222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.978 [2024-11-27 19:16:41.584268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.978 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:32.238 /dev/nbd0 00:18:32.238 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.499 1+0 records in 00:18:32.499 1+0 records out 00:18:32.499 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347995 s, 11.8 MB/s 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.499 19:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:32.499 /dev/nbd1 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:32.499 19:16:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.499 1+0 records in 00:18:32.499 1+0 records out 00:18:32.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002373 s, 17.3 MB/s 00:18:32.499 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.760 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.020 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.281 19:16:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.281 [2024-11-27 19:16:42.788555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.281 [2024-11-27 19:16:42.788655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.281 [2024-11-27 19:16:42.788684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:33.281 [2024-11-27 19:16:42.788702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.281 [2024-11-27 19:16:42.790774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.281 [2024-11-27 19:16:42.790809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.281 [2024-11-27 19:16:42.790894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:18:33.281 [2024-11-27 19:16:42.790943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.281 [2024-11-27 19:16:42.791088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.281 spare 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.281 [2024-11-27 19:16:42.890978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:33.281 [2024-11-27 19:16:42.891057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.281 [2024-11-27 19:16:42.891332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:33.281 [2024-11-27 19:16:42.891546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:33.281 [2024-11-27 19:16:42.891592] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:33.281 [2024-11-27 19:16:42.891828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.281 
19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.281 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.541 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.541 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.541 "name": "raid_bdev1", 00:18:33.541 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:33.541 "strip_size_kb": 0, 00:18:33.541 "state": "online", 00:18:33.541 "raid_level": "raid1", 00:18:33.541 "superblock": true, 00:18:33.541 "num_base_bdevs": 2, 00:18:33.541 "num_base_bdevs_discovered": 2, 00:18:33.541 "num_base_bdevs_operational": 2, 00:18:33.541 "base_bdevs_list": [ 00:18:33.541 { 00:18:33.541 "name": "spare", 00:18:33.541 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:33.541 "is_configured": true, 00:18:33.541 "data_offset": 256, 00:18:33.541 
"data_size": 7936 00:18:33.541 }, 00:18:33.541 { 00:18:33.541 "name": "BaseBdev2", 00:18:33.541 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:33.541 "is_configured": true, 00:18:33.541 "data_offset": 256, 00:18:33.541 "data_size": 7936 00:18:33.541 } 00:18:33.541 ] 00:18:33.541 }' 00:18:33.541 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.541 19:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.801 "name": "raid_bdev1", 00:18:33.801 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:33.801 "strip_size_kb": 0, 00:18:33.801 "state": "online", 00:18:33.801 "raid_level": "raid1", 00:18:33.801 "superblock": true, 00:18:33.801 "num_base_bdevs": 2, 
00:18:33.801 "num_base_bdevs_discovered": 2, 00:18:33.801 "num_base_bdevs_operational": 2, 00:18:33.801 "base_bdevs_list": [ 00:18:33.801 { 00:18:33.801 "name": "spare", 00:18:33.801 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:33.801 "is_configured": true, 00:18:33.801 "data_offset": 256, 00:18:33.801 "data_size": 7936 00:18:33.801 }, 00:18:33.801 { 00:18:33.801 "name": "BaseBdev2", 00:18:33.801 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:33.801 "is_configured": true, 00:18:33.801 "data_offset": 256, 00:18:33.801 "data_size": 7936 00:18:33.801 } 00:18:33.801 ] 00:18:33.801 }' 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.801 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.062 19:16:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.062 [2024-11-27 19:16:43.511542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.062 
19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.062 "name": "raid_bdev1", 00:18:34.062 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:34.062 "strip_size_kb": 0, 00:18:34.062 "state": "online", 00:18:34.062 "raid_level": "raid1", 00:18:34.062 "superblock": true, 00:18:34.062 "num_base_bdevs": 2, 00:18:34.062 "num_base_bdevs_discovered": 1, 00:18:34.062 "num_base_bdevs_operational": 1, 00:18:34.062 "base_bdevs_list": [ 00:18:34.062 { 00:18:34.062 "name": null, 00:18:34.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.062 "is_configured": false, 00:18:34.062 "data_offset": 0, 00:18:34.062 "data_size": 7936 00:18:34.062 }, 00:18:34.062 { 00:18:34.062 "name": "BaseBdev2", 00:18:34.062 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:34.062 "is_configured": true, 00:18:34.062 "data_offset": 256, 00:18:34.062 "data_size": 7936 00:18:34.062 } 00:18:34.062 ] 00:18:34.062 }' 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.062 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.632 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.632 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.632 19:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.632 [2024-11-27 19:16:43.990770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.632 [2024-11-27 19:16:43.990994] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.632 [2024-11-27 19:16:43.991052] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:34.632 [2024-11-27 19:16:43.991101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.632 [2024-11-27 19:16:44.006536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:34.632 19:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.632 19:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:34.632 [2024-11-27 19:16:44.008481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.572 "name": "raid_bdev1", 00:18:35.572 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:35.572 "strip_size_kb": 0, 00:18:35.572 "state": "online", 
00:18:35.572 "raid_level": "raid1", 00:18:35.572 "superblock": true, 00:18:35.572 "num_base_bdevs": 2, 00:18:35.572 "num_base_bdevs_discovered": 2, 00:18:35.572 "num_base_bdevs_operational": 2, 00:18:35.572 "process": { 00:18:35.572 "type": "rebuild", 00:18:35.572 "target": "spare", 00:18:35.572 "progress": { 00:18:35.572 "blocks": 2560, 00:18:35.572 "percent": 32 00:18:35.572 } 00:18:35.572 }, 00:18:35.572 "base_bdevs_list": [ 00:18:35.572 { 00:18:35.572 "name": "spare", 00:18:35.572 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:35.572 "is_configured": true, 00:18:35.572 "data_offset": 256, 00:18:35.572 "data_size": 7936 00:18:35.572 }, 00:18:35.572 { 00:18:35.572 "name": "BaseBdev2", 00:18:35.572 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:35.572 "is_configured": true, 00:18:35.572 "data_offset": 256, 00:18:35.572 "data_size": 7936 00:18:35.572 } 00:18:35.572 ] 00:18:35.572 }' 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.572 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.572 [2024-11-27 19:16:45.147821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.833 [2024-11-27 19:16:45.213184] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.833 [2024-11-27 
19:16:45.213287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.833 [2024-11-27 19:16:45.213317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.833 [2024-11-27 19:16:45.213338] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.833 "name": "raid_bdev1", 00:18:35.833 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:35.833 "strip_size_kb": 0, 00:18:35.833 "state": "online", 00:18:35.833 "raid_level": "raid1", 00:18:35.833 "superblock": true, 00:18:35.833 "num_base_bdevs": 2, 00:18:35.833 "num_base_bdevs_discovered": 1, 00:18:35.833 "num_base_bdevs_operational": 1, 00:18:35.833 "base_bdevs_list": [ 00:18:35.833 { 00:18:35.833 "name": null, 00:18:35.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.833 "is_configured": false, 00:18:35.833 "data_offset": 0, 00:18:35.833 "data_size": 7936 00:18:35.833 }, 00:18:35.833 { 00:18:35.833 "name": "BaseBdev2", 00:18:35.833 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:35.833 "is_configured": true, 00:18:35.833 "data_offset": 256, 00:18:35.833 "data_size": 7936 00:18:35.833 } 00:18:35.833 ] 00:18:35.833 }' 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.833 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.094 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.094 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.094 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:36.094 [2024-11-27 19:16:45.698420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.094 [2024-11-27 19:16:45.698521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.094 [2024-11-27 19:16:45.698558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:18:36.094 [2024-11-27 19:16:45.698587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.094 [2024-11-27 19:16:45.699053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.094 [2024-11-27 19:16:45.699115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.094 [2024-11-27 19:16:45.699220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:36.094 [2024-11-27 19:16:45.699251] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.094 [2024-11-27 19:16:45.699297] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:36.094 [2024-11-27 19:16:45.699350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.094 [2024-11-27 19:16:45.714765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:36.094 spare 00:18:36.094 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.094 [2024-11-27 19:16:45.716628] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.094 19:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.476 "name": "raid_bdev1", 00:18:37.476 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:37.476 "strip_size_kb": 0, 00:18:37.476 "state": "online", 00:18:37.476 "raid_level": "raid1", 00:18:37.476 "superblock": true, 00:18:37.476 "num_base_bdevs": 2, 00:18:37.476 "num_base_bdevs_discovered": 2, 00:18:37.476 "num_base_bdevs_operational": 2, 00:18:37.476 "process": { 00:18:37.476 "type": "rebuild", 00:18:37.476 "target": "spare", 00:18:37.476 "progress": { 00:18:37.476 "blocks": 2560, 00:18:37.476 "percent": 32 00:18:37.476 } 00:18:37.476 }, 00:18:37.476 "base_bdevs_list": [ 00:18:37.476 { 00:18:37.476 "name": "spare", 00:18:37.476 "uuid": "a0f1b2c7-947e-50df-8396-77f605187dbc", 00:18:37.476 "is_configured": true, 00:18:37.476 "data_offset": 256, 00:18:37.476 "data_size": 7936 00:18:37.476 }, 00:18:37.476 { 00:18:37.476 "name": "BaseBdev2", 00:18:37.476 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:37.476 "is_configured": true, 00:18:37.476 "data_offset": 256, 00:18:37.476 "data_size": 7936 00:18:37.476 } 00:18:37.476 ] 00:18:37.476 }' 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.476 [2024-11-27 19:16:46.880343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.476 [2024-11-27 19:16:46.921219] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.476 [2024-11-27 19:16:46.921274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.476 [2024-11-27 19:16:46.921292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.476 [2024-11-27 19:16:46.921298] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.476 19:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.476 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.476 "name": "raid_bdev1", 00:18:37.476 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:37.476 "strip_size_kb": 0, 00:18:37.476 "state": "online", 00:18:37.476 "raid_level": "raid1", 00:18:37.476 "superblock": true, 00:18:37.476 "num_base_bdevs": 2, 00:18:37.476 "num_base_bdevs_discovered": 1, 00:18:37.476 "num_base_bdevs_operational": 1, 00:18:37.477 "base_bdevs_list": [ 00:18:37.477 { 00:18:37.477 "name": null, 00:18:37.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.477 "is_configured": false, 00:18:37.477 "data_offset": 0, 00:18:37.477 "data_size": 7936 00:18:37.477 }, 00:18:37.477 { 00:18:37.477 "name": "BaseBdev2", 00:18:37.477 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:37.477 "is_configured": true, 00:18:37.477 "data_offset": 256, 00:18:37.477 "data_size": 7936 00:18:37.477 } 00:18:37.477 ] 00:18:37.477 }' 
00:18:37.477 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.477 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.048 "name": "raid_bdev1", 00:18:38.048 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:38.048 "strip_size_kb": 0, 00:18:38.048 "state": "online", 00:18:38.048 "raid_level": "raid1", 00:18:38.048 "superblock": true, 00:18:38.048 "num_base_bdevs": 2, 00:18:38.048 "num_base_bdevs_discovered": 1, 00:18:38.048 "num_base_bdevs_operational": 1, 00:18:38.048 "base_bdevs_list": [ 00:18:38.048 { 00:18:38.048 "name": null, 00:18:38.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.048 "is_configured": false, 00:18:38.048 "data_offset": 0, 
00:18:38.048 "data_size": 7936 00:18:38.048 }, 00:18:38.048 { 00:18:38.048 "name": "BaseBdev2", 00:18:38.048 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:38.048 "is_configured": true, 00:18:38.048 "data_offset": 256, 00:18:38.048 "data_size": 7936 00:18:38.048 } 00:18:38.048 ] 00:18:38.048 }' 00:18:38.048 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.049 [2024-11-27 19:16:47.573394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:38.049 [2024-11-27 19:16:47.573449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.049 [2024-11-27 19:16:47.573475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:38.049 [2024-11-27 19:16:47.573494] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.049 [2024-11-27 19:16:47.573949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.049 [2024-11-27 19:16:47.573978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:38.049 [2024-11-27 19:16:47.574075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:38.049 [2024-11-27 19:16:47.574088] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:38.049 [2024-11-27 19:16:47.574098] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:38.049 [2024-11-27 19:16:47.574107] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:38.049 BaseBdev1 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.049 19:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:38.989 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.250 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.250 "name": "raid_bdev1", 00:18:39.250 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:39.250 "strip_size_kb": 0, 00:18:39.250 "state": "online", 00:18:39.250 "raid_level": "raid1", 00:18:39.250 "superblock": true, 00:18:39.250 "num_base_bdevs": 2, 00:18:39.250 "num_base_bdevs_discovered": 1, 00:18:39.250 "num_base_bdevs_operational": 1, 00:18:39.250 "base_bdevs_list": [ 00:18:39.250 { 00:18:39.250 "name": null, 00:18:39.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.250 "is_configured": false, 00:18:39.250 "data_offset": 0, 00:18:39.250 "data_size": 7936 00:18:39.250 }, 00:18:39.250 { 00:18:39.250 "name": "BaseBdev2", 00:18:39.250 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:39.250 "is_configured": true, 00:18:39.250 "data_offset": 256, 00:18:39.250 "data_size": 7936 00:18:39.250 } 00:18:39.250 ] 00:18:39.250 }' 00:18:39.250 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.250 19:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.510 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.511 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.511 "name": "raid_bdev1", 00:18:39.511 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:39.511 "strip_size_kb": 0, 00:18:39.511 "state": "online", 00:18:39.511 "raid_level": "raid1", 00:18:39.511 "superblock": true, 00:18:39.511 "num_base_bdevs": 2, 00:18:39.511 "num_base_bdevs_discovered": 1, 00:18:39.511 "num_base_bdevs_operational": 1, 00:18:39.511 "base_bdevs_list": [ 00:18:39.511 { 00:18:39.511 "name": null, 00:18:39.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.511 "is_configured": false, 00:18:39.511 "data_offset": 0, 00:18:39.511 "data_size": 7936 00:18:39.511 }, 00:18:39.511 { 00:18:39.511 "name": "BaseBdev2", 00:18:39.511 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:39.511 "is_configured": true, 
00:18:39.511 "data_offset": 256, 00:18:39.511 "data_size": 7936 00:18:39.511 } 00:18:39.511 ] 00:18:39.511 }' 00:18:39.511 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.511 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.511 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.770 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.770 [2024-11-27 19:16:49.166675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.770 [2024-11-27 19:16:49.166847] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.770 [2024-11-27 19:16:49.166881] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:39.770 request: 00:18:39.770 { 00:18:39.770 "base_bdev": "BaseBdev1", 00:18:39.770 "raid_bdev": "raid_bdev1", 00:18:39.771 "method": "bdev_raid_add_base_bdev", 00:18:39.771 "req_id": 1 00:18:39.771 } 00:18:39.771 Got JSON-RPC error response 00:18:39.771 response: 00:18:39.771 { 00:18:39.771 "code": -22, 00:18:39.771 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:39.771 } 00:18:39.771 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.771 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:39.771 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.771 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.771 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.771 19:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.710 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.710 "name": "raid_bdev1", 00:18:40.710 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:40.710 "strip_size_kb": 0, 00:18:40.710 "state": "online", 00:18:40.710 "raid_level": "raid1", 00:18:40.710 "superblock": true, 00:18:40.710 "num_base_bdevs": 2, 00:18:40.710 "num_base_bdevs_discovered": 1, 00:18:40.710 "num_base_bdevs_operational": 1, 00:18:40.710 "base_bdevs_list": [ 00:18:40.710 { 00:18:40.710 "name": null, 00:18:40.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.710 "is_configured": false, 00:18:40.710 "data_offset": 0, 00:18:40.710 "data_size": 7936 00:18:40.710 }, 00:18:40.710 { 00:18:40.710 "name": "BaseBdev2", 00:18:40.710 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:40.710 "is_configured": true, 00:18:40.710 "data_offset": 256, 00:18:40.710 "data_size": 7936 00:18:40.710 } 00:18:40.710 ] 00:18:40.710 }' 
00:18:40.711 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.711 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.280 "name": "raid_bdev1", 00:18:41.280 "uuid": "60b027da-2a36-4728-8cba-0699c915e787", 00:18:41.280 "strip_size_kb": 0, 00:18:41.280 "state": "online", 00:18:41.280 "raid_level": "raid1", 00:18:41.280 "superblock": true, 00:18:41.280 "num_base_bdevs": 2, 00:18:41.280 "num_base_bdevs_discovered": 1, 00:18:41.280 "num_base_bdevs_operational": 1, 00:18:41.280 "base_bdevs_list": [ 00:18:41.280 { 00:18:41.280 "name": null, 00:18:41.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.280 "is_configured": false, 00:18:41.280 "data_offset": 0, 
00:18:41.280 "data_size": 7936 00:18:41.280 }, 00:18:41.280 { 00:18:41.280 "name": "BaseBdev2", 00:18:41.280 "uuid": "09032a40-0e90-58cd-8579-64c0ec2a0e6e", 00:18:41.280 "is_configured": true, 00:18:41.280 "data_offset": 256, 00:18:41.280 "data_size": 7936 00:18:41.280 } 00:18:41.280 ] 00:18:41.280 }' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86571 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86571 ']' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86571 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86571 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.280 killing process with pid 86571 00:18:41.280 Received shutdown signal, test time was about 60.000000 seconds 00:18:41.280 00:18:41.280 Latency(us) 00:18:41.280 [2024-11-27T19:16:50.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.280 [2024-11-27T19:16:50.916Z] =================================================================================================================== 00:18:41.280 
[2024-11-27T19:16:50.916Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86571' 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86571 00:18:41.280 [2024-11-27 19:16:50.826436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.280 [2024-11-27 19:16:50.826551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.280 [2024-11-27 19:16:50.826600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.280 19:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86571 00:18:41.280 [2024-11-27 19:16:50.826610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:41.540 [2024-11-27 19:16:51.106239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.922 ************************************ 00:18:42.922 END TEST raid_rebuild_test_sb_4k 00:18:42.922 19:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:42.922 00:18:42.922 real 0m19.775s 00:18:42.922 user 0m25.851s 00:18:42.922 sys 0m2.729s 00:18:42.922 19:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.922 19:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:42.922 ************************************ 00:18:42.922 19:16:52 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:42.922 19:16:52 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:42.922 19:16:52 bdev_raid 
-- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:42.922 19:16:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.922 19:16:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.922 ************************************ 00:18:42.922 START TEST raid_state_function_test_sb_md_separate 00:18:42.922 ************************************ 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:42.922 19:16:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87263 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87263' 00:18:42.922 Process raid pid: 87263 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87263 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87263 ']' 00:18:42.922 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.922 19:16:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.922 [2024-11-27 19:16:52.322920] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:42.922 [2024-11-27 19:16:52.323131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.922 [2024-11-27 19:16:52.500365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.183 [2024-11-27 19:16:52.601530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.183 [2024-11-27 19:16:52.806483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.183 [2024-11-27 19:16:52.806513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.754 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.754 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:43.754 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:43.754 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.754 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.754 [2024-11-27 19:16:53.154044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.754 [2024-11-27 19:16:53.154099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.754 [2024-11-27 19:16:53.154109] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.755 [2024-11-27 19:16:53.154119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.755 "name": "Existed_Raid", 00:18:43.755 "uuid": "5ce7601b-4a44-4ff3-9dd3-bb275431bab4", 00:18:43.755 "strip_size_kb": 0, 00:18:43.755 "state": "configuring", 00:18:43.755 "raid_level": "raid1", 00:18:43.755 "superblock": true, 00:18:43.755 "num_base_bdevs": 2, 00:18:43.755 "num_base_bdevs_discovered": 0, 00:18:43.755 "num_base_bdevs_operational": 2, 00:18:43.755 "base_bdevs_list": [ 00:18:43.755 { 00:18:43.755 "name": "BaseBdev1", 00:18:43.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.755 "is_configured": false, 00:18:43.755 "data_offset": 0, 00:18:43.755 "data_size": 0 00:18:43.755 }, 00:18:43.755 { 00:18:43.755 "name": "BaseBdev2", 00:18:43.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.755 "is_configured": false, 00:18:43.755 "data_offset": 0, 00:18:43.755 "data_size": 0 00:18:43.755 } 00:18:43.755 ] 00:18:43.755 }' 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:43.755 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.015 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:44.015 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.015 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.276 [2024-11-27 19:16:53.653130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.276 [2024-11-27 19:16:53.653206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.276 [2024-11-27 19:16:53.665118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:44.276 [2024-11-27 19:16:53.665195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:44.276 [2024-11-27 19:16:53.665237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.276 [2024-11-27 19:16:53.665260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.276 
19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.276 [2024-11-27 19:16:53.712009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.276 BaseBdev1 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.276 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.276 [ 00:18:44.276 { 00:18:44.276 "name": "BaseBdev1", 00:18:44.276 "aliases": [ 00:18:44.277 "11d8ab3b-27d4-432e-84ca-b73b7b0f6ec1" 00:18:44.277 ], 00:18:44.277 "product_name": "Malloc disk", 00:18:44.277 "block_size": 4096, 00:18:44.277 "num_blocks": 8192, 00:18:44.277 "uuid": "11d8ab3b-27d4-432e-84ca-b73b7b0f6ec1", 00:18:44.277 "md_size": 32, 00:18:44.277 "md_interleave": false, 00:18:44.277 "dif_type": 0, 00:18:44.277 "assigned_rate_limits": { 00:18:44.277 "rw_ios_per_sec": 0, 00:18:44.277 "rw_mbytes_per_sec": 0, 00:18:44.277 "r_mbytes_per_sec": 0, 00:18:44.277 "w_mbytes_per_sec": 0 00:18:44.277 }, 00:18:44.277 "claimed": true, 00:18:44.277 "claim_type": "exclusive_write", 00:18:44.277 "zoned": false, 00:18:44.277 "supported_io_types": { 00:18:44.277 "read": true, 00:18:44.277 "write": true, 00:18:44.277 "unmap": true, 00:18:44.277 "flush": true, 00:18:44.277 "reset": true, 00:18:44.277 "nvme_admin": false, 00:18:44.277 "nvme_io": false, 00:18:44.277 "nvme_io_md": false, 00:18:44.277 "write_zeroes": true, 00:18:44.277 "zcopy": true, 00:18:44.277 "get_zone_info": false, 00:18:44.277 "zone_management": false, 00:18:44.277 "zone_append": false, 00:18:44.277 "compare": false, 00:18:44.277 "compare_and_write": false, 00:18:44.277 "abort": true, 00:18:44.277 "seek_hole": false, 00:18:44.277 "seek_data": false, 00:18:44.277 "copy": true, 00:18:44.277 "nvme_iov_md": false 00:18:44.277 }, 00:18:44.277 "memory_domains": [ 00:18:44.277 { 00:18:44.277 "dma_device_id": "system", 00:18:44.277 "dma_device_type": 1 00:18:44.277 }, 00:18:44.277 { 00:18:44.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.277 "dma_device_type": 2 00:18:44.277 } 
00:18:44.277 ], 00:18:44.277 "driver_specific": {} 00:18:44.277 } 00:18:44.277 ] 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.277 "name": "Existed_Raid", 00:18:44.277 "uuid": "8cd22a6b-946b-41bf-bbec-80c9509da251", 00:18:44.277 "strip_size_kb": 0, 00:18:44.277 "state": "configuring", 00:18:44.277 "raid_level": "raid1", 00:18:44.277 "superblock": true, 00:18:44.277 "num_base_bdevs": 2, 00:18:44.277 "num_base_bdevs_discovered": 1, 00:18:44.277 "num_base_bdevs_operational": 2, 00:18:44.277 "base_bdevs_list": [ 00:18:44.277 { 00:18:44.277 "name": "BaseBdev1", 00:18:44.277 "uuid": "11d8ab3b-27d4-432e-84ca-b73b7b0f6ec1", 00:18:44.277 "is_configured": true, 00:18:44.277 "data_offset": 256, 00:18:44.277 "data_size": 7936 00:18:44.277 }, 00:18:44.277 { 00:18:44.277 "name": "BaseBdev2", 00:18:44.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.277 "is_configured": false, 00:18:44.277 "data_offset": 0, 00:18:44.277 "data_size": 0 00:18:44.277 } 00:18:44.277 ] 00:18:44.277 }' 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.277 19:16:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 [2024-11-27 19:16:54.191269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:18:44.848 [2024-11-27 19:16:54.191319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 [2024-11-27 19:16:54.203280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.848 [2024-11-27 19:16:54.205093] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.848 [2024-11-27 19:16:54.205138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.848 "name": "Existed_Raid", 00:18:44.848 "uuid": "2d81e905-6bab-437e-bb1f-7b60e5a57730", 00:18:44.848 "strip_size_kb": 0, 00:18:44.848 "state": "configuring", 00:18:44.848 "raid_level": "raid1", 00:18:44.848 "superblock": true, 00:18:44.848 "num_base_bdevs": 2, 00:18:44.848 "num_base_bdevs_discovered": 1, 00:18:44.848 "num_base_bdevs_operational": 2, 00:18:44.848 "base_bdevs_list": [ 00:18:44.848 { 00:18:44.848 "name": 
"BaseBdev1", 00:18:44.848 "uuid": "11d8ab3b-27d4-432e-84ca-b73b7b0f6ec1", 00:18:44.848 "is_configured": true, 00:18:44.848 "data_offset": 256, 00:18:44.848 "data_size": 7936 00:18:44.848 }, 00:18:44.848 { 00:18:44.848 "name": "BaseBdev2", 00:18:44.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.848 "is_configured": false, 00:18:44.848 "data_offset": 0, 00:18:44.848 "data_size": 0 00:18:44.848 } 00:18:44.848 ] 00:18:44.848 }' 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.848 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.108 [2024-11-27 19:16:54.733968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.108 [2024-11-27 19:16:54.734294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:45.108 [2024-11-27 19:16:54.734354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:45.108 [2024-11-27 19:16:54.734461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:45.108 [2024-11-27 19:16:54.734621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:45.108 [2024-11-27 19:16:54.734665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:45.108 BaseBdev2 00:18:45.108 [2024-11-27 19:16:54.734820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.108 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.368 [ 00:18:45.368 { 00:18:45.368 "name": "BaseBdev2", 00:18:45.368 "aliases": [ 00:18:45.368 "3b998780-b14e-4534-a94e-cf6477587d1d" 00:18:45.368 ], 00:18:45.368 "product_name": "Malloc disk", 00:18:45.368 
"block_size": 4096, 00:18:45.368 "num_blocks": 8192, 00:18:45.368 "uuid": "3b998780-b14e-4534-a94e-cf6477587d1d", 00:18:45.368 "md_size": 32, 00:18:45.368 "md_interleave": false, 00:18:45.368 "dif_type": 0, 00:18:45.368 "assigned_rate_limits": { 00:18:45.368 "rw_ios_per_sec": 0, 00:18:45.368 "rw_mbytes_per_sec": 0, 00:18:45.368 "r_mbytes_per_sec": 0, 00:18:45.368 "w_mbytes_per_sec": 0 00:18:45.368 }, 00:18:45.368 "claimed": true, 00:18:45.368 "claim_type": "exclusive_write", 00:18:45.368 "zoned": false, 00:18:45.368 "supported_io_types": { 00:18:45.368 "read": true, 00:18:45.368 "write": true, 00:18:45.368 "unmap": true, 00:18:45.368 "flush": true, 00:18:45.368 "reset": true, 00:18:45.368 "nvme_admin": false, 00:18:45.368 "nvme_io": false, 00:18:45.368 "nvme_io_md": false, 00:18:45.368 "write_zeroes": true, 00:18:45.368 "zcopy": true, 00:18:45.368 "get_zone_info": false, 00:18:45.368 "zone_management": false, 00:18:45.368 "zone_append": false, 00:18:45.368 "compare": false, 00:18:45.368 "compare_and_write": false, 00:18:45.368 "abort": true, 00:18:45.368 "seek_hole": false, 00:18:45.368 "seek_data": false, 00:18:45.368 "copy": true, 00:18:45.368 "nvme_iov_md": false 00:18:45.368 }, 00:18:45.368 "memory_domains": [ 00:18:45.368 { 00:18:45.368 "dma_device_id": "system", 00:18:45.368 "dma_device_type": 1 00:18:45.368 }, 00:18:45.368 { 00:18:45.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.368 "dma_device_type": 2 00:18:45.368 } 00:18:45.368 ], 00:18:45.368 "driver_specific": {} 00:18:45.368 } 00:18:45.368 ] 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i 
< num_base_bdevs )) 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.368 19:16:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.368 "name": "Existed_Raid", 00:18:45.368 "uuid": "2d81e905-6bab-437e-bb1f-7b60e5a57730", 00:18:45.368 "strip_size_kb": 0, 00:18:45.368 "state": "online", 00:18:45.368 "raid_level": "raid1", 00:18:45.368 "superblock": true, 00:18:45.368 "num_base_bdevs": 2, 00:18:45.368 "num_base_bdevs_discovered": 2, 00:18:45.368 "num_base_bdevs_operational": 2, 00:18:45.368 "base_bdevs_list": [ 00:18:45.368 { 00:18:45.368 "name": "BaseBdev1", 00:18:45.368 "uuid": "11d8ab3b-27d4-432e-84ca-b73b7b0f6ec1", 00:18:45.368 "is_configured": true, 00:18:45.368 "data_offset": 256, 00:18:45.368 "data_size": 7936 00:18:45.368 }, 00:18:45.368 { 00:18:45.368 "name": "BaseBdev2", 00:18:45.368 "uuid": "3b998780-b14e-4534-a94e-cf6477587d1d", 00:18:45.368 "is_configured": true, 00:18:45.368 "data_offset": 256, 00:18:45.368 "data_size": 7936 00:18:45.368 } 00:18:45.368 ] 00:18:45.368 }' 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.368 19:16:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.628 
19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.628 [2024-11-27 19:16:55.221466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.628 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.628 "name": "Existed_Raid", 00:18:45.628 "aliases": [ 00:18:45.628 "2d81e905-6bab-437e-bb1f-7b60e5a57730" 00:18:45.628 ], 00:18:45.628 "product_name": "Raid Volume", 00:18:45.628 "block_size": 4096, 00:18:45.628 "num_blocks": 7936, 00:18:45.628 "uuid": "2d81e905-6bab-437e-bb1f-7b60e5a57730", 00:18:45.628 "md_size": 32, 00:18:45.628 "md_interleave": false, 00:18:45.628 "dif_type": 0, 00:18:45.628 "assigned_rate_limits": { 00:18:45.628 "rw_ios_per_sec": 0, 00:18:45.628 "rw_mbytes_per_sec": 0, 00:18:45.628 "r_mbytes_per_sec": 0, 00:18:45.628 "w_mbytes_per_sec": 0 00:18:45.628 }, 00:18:45.628 "claimed": false, 00:18:45.628 "zoned": false, 00:18:45.628 "supported_io_types": { 00:18:45.628 "read": true, 00:18:45.628 "write": true, 00:18:45.628 "unmap": false, 00:18:45.628 "flush": false, 00:18:45.628 "reset": true, 00:18:45.628 "nvme_admin": false, 00:18:45.628 "nvme_io": false, 00:18:45.628 "nvme_io_md": false, 00:18:45.628 "write_zeroes": true, 00:18:45.628 "zcopy": false, 00:18:45.628 "get_zone_info": false, 00:18:45.628 "zone_management": false, 00:18:45.628 "zone_append": false, 00:18:45.628 "compare": false, 00:18:45.628 
"compare_and_write": false, 00:18:45.628 "abort": false, 00:18:45.628 "seek_hole": false, 00:18:45.628 "seek_data": false, 00:18:45.628 "copy": false, 00:18:45.628 "nvme_iov_md": false 00:18:45.628 }, 00:18:45.628 "memory_domains": [ 00:18:45.628 { 00:18:45.628 "dma_device_id": "system", 00:18:45.628 "dma_device_type": 1 00:18:45.628 }, 00:18:45.628 { 00:18:45.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.628 "dma_device_type": 2 00:18:45.628 }, 00:18:45.628 { 00:18:45.628 "dma_device_id": "system", 00:18:45.628 "dma_device_type": 1 00:18:45.628 }, 00:18:45.629 { 00:18:45.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.629 "dma_device_type": 2 00:18:45.629 } 00:18:45.629 ], 00:18:45.629 "driver_specific": { 00:18:45.629 "raid": { 00:18:45.629 "uuid": "2d81e905-6bab-437e-bb1f-7b60e5a57730", 00:18:45.629 "strip_size_kb": 0, 00:18:45.629 "state": "online", 00:18:45.629 "raid_level": "raid1", 00:18:45.629 "superblock": true, 00:18:45.629 "num_base_bdevs": 2, 00:18:45.629 "num_base_bdevs_discovered": 2, 00:18:45.629 "num_base_bdevs_operational": 2, 00:18:45.629 "base_bdevs_list": [ 00:18:45.629 { 00:18:45.629 "name": "BaseBdev1", 00:18:45.629 "uuid": "11d8ab3b-27d4-432e-84ca-b73b7b0f6ec1", 00:18:45.629 "is_configured": true, 00:18:45.629 "data_offset": 256, 00:18:45.629 "data_size": 7936 00:18:45.629 }, 00:18:45.629 { 00:18:45.629 "name": "BaseBdev2", 00:18:45.629 "uuid": "3b998780-b14e-4534-a94e-cf6477587d1d", 00:18:45.629 "is_configured": true, 00:18:45.629 "data_offset": 256, 00:18:45.629 "data_size": 7936 00:18:45.629 } 00:18:45.629 ] 00:18:45.629 } 00:18:45.629 } 00:18:45.629 }' 00:18:45.629 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:45.889 BaseBdev2' 00:18:45.889 19:16:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.889 19:16:55 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.889 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.889 [2024-11-27 19:16:55.464817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.149 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.149 "name": "Existed_Raid", 00:18:46.149 "uuid": "2d81e905-6bab-437e-bb1f-7b60e5a57730", 00:18:46.150 "strip_size_kb": 0, 00:18:46.150 "state": "online", 00:18:46.150 "raid_level": "raid1", 
00:18:46.150 "superblock": true, 00:18:46.150 "num_base_bdevs": 2, 00:18:46.150 "num_base_bdevs_discovered": 1, 00:18:46.150 "num_base_bdevs_operational": 1, 00:18:46.150 "base_bdevs_list": [ 00:18:46.150 { 00:18:46.150 "name": null, 00:18:46.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.150 "is_configured": false, 00:18:46.150 "data_offset": 0, 00:18:46.150 "data_size": 7936 00:18:46.150 }, 00:18:46.150 { 00:18:46.150 "name": "BaseBdev2", 00:18:46.150 "uuid": "3b998780-b14e-4534-a94e-cf6477587d1d", 00:18:46.150 "is_configured": true, 00:18:46.150 "data_offset": 256, 00:18:46.150 "data_size": 7936 00:18:46.150 } 00:18:46.150 ] 00:18:46.150 }' 00:18:46.150 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.150 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.410 19:16:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.410 [2024-11-27 19:16:55.993854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:46.410 [2024-11-27 19:16:55.993953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.676 [2024-11-27 19:16:56.089950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.676 [2024-11-27 19:16:56.089999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.676 [2024-11-27 19:16:56.090011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.676 19:16:56 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87263 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87263 ']' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87263 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87263 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.676 killing process with pid 87263 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87263' 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87263 00:18:46.676 [2024-11-27 19:16:56.184150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.676 19:16:56 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@978 -- # wait 87263 00:18:46.676 [2024-11-27 19:16:56.200242] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.068 ************************************ 00:18:48.068 END TEST raid_state_function_test_sb_md_separate 00:18:48.068 ************************************ 00:18:48.068 19:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:48.068 00:18:48.068 real 0m5.051s 00:18:48.068 user 0m7.244s 00:18:48.068 sys 0m0.915s 00:18:48.068 19:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.068 19:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.068 19:16:57 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:48.068 19:16:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:48.068 19:16:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.068 19:16:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.068 ************************************ 00:18:48.068 START TEST raid_superblock_test_md_separate 00:18:48.068 ************************************ 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt=() 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:48.068 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87514 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87514 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87514 ']' 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.069 19:16:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.069 [2024-11-27 19:16:57.439758] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:48.069 [2024-11-27 19:16:57.439877] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87514 ] 00:18:48.069 [2024-11-27 19:16:57.614175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.330 [2024-11-27 19:16:57.714562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.330 [2024-11-27 19:16:57.895003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.330 [2024-11-27 19:16:57.895058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 
00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.900 malloc1 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:48.900 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 [2024-11-27 19:16:58.296839] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:48.901 [2024-11-27 19:16:58.296976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.901 [2024-11-27 19:16:58.297016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:48.901 [2024-11-27 19:16:58.297046] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.901 [2024-11-27 19:16:58.298920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.901 [2024-11-27 19:16:58.299004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:48.901 pt1 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 malloc2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 [2024-11-27 19:16:58.356905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.901 [2024-11-27 19:16:58.357018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.901 [2024-11-27 19:16:58.357053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:48.901 [2024-11-27 19:16:58.357079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.901 [2024-11-27 19:16:58.358872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.901 [2024-11-27 19:16:58.358937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.901 pt2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 [2024-11-27 19:16:58.368915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:48.901 [2024-11-27 19:16:58.370564] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.901 [2024-11-27 19:16:58.370780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:48.901 [2024-11-27 19:16:58.370811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:48.901 [2024-11-27 19:16:58.370918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:48.901 [2024-11-27 19:16:58.371057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:48.901 [2024-11-27 19:16:58.371105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:48.901 [2024-11-27 19:16:58.371260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.901 19:16:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.901 "name": "raid_bdev1", 00:18:48.901 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:48.901 "strip_size_kb": 0, 00:18:48.901 "state": "online", 00:18:48.901 "raid_level": "raid1", 00:18:48.901 "superblock": true, 00:18:48.901 "num_base_bdevs": 2, 00:18:48.901 "num_base_bdevs_discovered": 2, 00:18:48.901 "num_base_bdevs_operational": 2, 00:18:48.901 "base_bdevs_list": [ 00:18:48.901 { 00:18:48.901 "name": "pt1", 00:18:48.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:48.901 "is_configured": true, 00:18:48.901 "data_offset": 256, 00:18:48.901 "data_size": 7936 00:18:48.901 }, 00:18:48.901 { 00:18:48.901 "name": "pt2", 00:18:48.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:48.901 "is_configured": true, 00:18:48.901 "data_offset": 256, 00:18:48.901 "data_size": 7936 00:18:48.901 } 00:18:48.901 ] 00:18:48.901 }' 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.901 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.473 [2024-11-27 19:16:58.840321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.473 "name": "raid_bdev1", 00:18:49.473 "aliases": [ 00:18:49.473 "b24a8fba-1a70-4e90-bf00-f4167ad14f76" 00:18:49.473 ], 00:18:49.473 "product_name": "Raid Volume", 00:18:49.473 "block_size": 4096, 00:18:49.473 "num_blocks": 7936, 00:18:49.473 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:49.473 "md_size": 32, 00:18:49.473 "md_interleave": false, 00:18:49.473 "dif_type": 0, 00:18:49.473 "assigned_rate_limits": { 00:18:49.473 
"rw_ios_per_sec": 0, 00:18:49.473 "rw_mbytes_per_sec": 0, 00:18:49.473 "r_mbytes_per_sec": 0, 00:18:49.473 "w_mbytes_per_sec": 0 00:18:49.473 }, 00:18:49.473 "claimed": false, 00:18:49.473 "zoned": false, 00:18:49.473 "supported_io_types": { 00:18:49.473 "read": true, 00:18:49.473 "write": true, 00:18:49.473 "unmap": false, 00:18:49.473 "flush": false, 00:18:49.473 "reset": true, 00:18:49.473 "nvme_admin": false, 00:18:49.473 "nvme_io": false, 00:18:49.473 "nvme_io_md": false, 00:18:49.473 "write_zeroes": true, 00:18:49.473 "zcopy": false, 00:18:49.473 "get_zone_info": false, 00:18:49.473 "zone_management": false, 00:18:49.473 "zone_append": false, 00:18:49.473 "compare": false, 00:18:49.473 "compare_and_write": false, 00:18:49.473 "abort": false, 00:18:49.473 "seek_hole": false, 00:18:49.473 "seek_data": false, 00:18:49.473 "copy": false, 00:18:49.473 "nvme_iov_md": false 00:18:49.473 }, 00:18:49.473 "memory_domains": [ 00:18:49.473 { 00:18:49.473 "dma_device_id": "system", 00:18:49.473 "dma_device_type": 1 00:18:49.473 }, 00:18:49.473 { 00:18:49.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.473 "dma_device_type": 2 00:18:49.473 }, 00:18:49.473 { 00:18:49.473 "dma_device_id": "system", 00:18:49.473 "dma_device_type": 1 00:18:49.473 }, 00:18:49.473 { 00:18:49.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.473 "dma_device_type": 2 00:18:49.473 } 00:18:49.473 ], 00:18:49.473 "driver_specific": { 00:18:49.473 "raid": { 00:18:49.473 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:49.473 "strip_size_kb": 0, 00:18:49.473 "state": "online", 00:18:49.473 "raid_level": "raid1", 00:18:49.473 "superblock": true, 00:18:49.473 "num_base_bdevs": 2, 00:18:49.473 "num_base_bdevs_discovered": 2, 00:18:49.473 "num_base_bdevs_operational": 2, 00:18:49.473 "base_bdevs_list": [ 00:18:49.473 { 00:18:49.473 "name": "pt1", 00:18:49.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:49.473 "is_configured": true, 00:18:49.473 "data_offset": 256, 00:18:49.473 
"data_size": 7936 00:18:49.473 }, 00:18:49.473 { 00:18:49.473 "name": "pt2", 00:18:49.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.473 "is_configured": true, 00:18:49.473 "data_offset": 256, 00:18:49.473 "data_size": 7936 00:18:49.473 } 00:18:49.473 ] 00:18:49.473 } 00:18:49.473 } 00:18:49.473 }' 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.473 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:49.473 pt2' 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.474 19:16:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:49.474 19:16:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 [2024-11-27 19:16:59.056001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b24a8fba-1a70-4e90-bf00-f4167ad14f76 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 
b24a8fba-1a70-4e90-bf00-f4167ad14f76 ']' 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 [2024-11-27 19:16:59.095661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.474 [2024-11-27 19:16:59.095734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.474 [2024-11-27 19:16:59.095835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.474 [2024-11-27 19:16:59.095902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.474 [2024-11-27 19:16:59.095936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.474 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:49.736 
19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:49.736 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.737 [2024-11-27 19:16:59.235426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:49.737 [2024-11-27 19:16:59.237126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:49.737 [2024-11-27 19:16:59.237187] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:49.737 [2024-11-27 19:16:59.237231] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:49.737 [2024-11-27 19:16:59.237244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:18:49.737 [2024-11-27 19:16:59.237254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:49.737 request: 00:18:49.737 { 00:18:49.737 "name": "raid_bdev1", 00:18:49.737 "raid_level": "raid1", 00:18:49.737 "base_bdevs": [ 00:18:49.737 "malloc1", 00:18:49.737 "malloc2" 00:18:49.737 ], 00:18:49.737 "superblock": false, 00:18:49.737 "method": "bdev_raid_create", 00:18:49.737 "req_id": 1 00:18:49.737 } 00:18:49.737 Got JSON-RPC error response 00:18:49.737 response: 00:18:49.737 { 00:18:49.737 "code": -17, 00:18:49.737 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:49.737 } 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:49.737 19:16:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.737 [2024-11-27 19:16:59.303300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:49.737 [2024-11-27 19:16:59.303349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.737 [2024-11-27 19:16:59.303363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:49.737 [2024-11-27 19:16:59.303373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.737 [2024-11-27 19:16:59.305270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.737 [2024-11-27 19:16:59.305310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:49.737 [2024-11-27 19:16:59.305349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:49.737 [2024-11-27 19:16:59.305400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:49.737 pt1 00:18:49.737 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.738 "name": "raid_bdev1", 00:18:49.738 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:49.738 "strip_size_kb": 0, 00:18:49.738 "state": "configuring", 00:18:49.738 "raid_level": "raid1", 00:18:49.738 "superblock": true, 00:18:49.738 "num_base_bdevs": 2, 00:18:49.738 "num_base_bdevs_discovered": 1, 00:18:49.738 "num_base_bdevs_operational": 2, 00:18:49.738 "base_bdevs_list": [ 00:18:49.738 { 00:18:49.738 
"name": "pt1", 00:18:49.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:49.738 "is_configured": true, 00:18:49.738 "data_offset": 256, 00:18:49.738 "data_size": 7936 00:18:49.738 }, 00:18:49.738 { 00:18:49.738 "name": null, 00:18:49.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:49.738 "is_configured": false, 00:18:49.738 "data_offset": 256, 00:18:49.738 "data_size": 7936 00:18:49.738 } 00:18:49.738 ] 00:18:49.738 }' 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.738 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.309 [2024-11-27 19:16:59.714582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:50.309 [2024-11-27 19:16:59.714703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.309 [2024-11-27 19:16:59.714752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:50.309 [2024-11-27 19:16:59.714785] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.309 [2024-11-27 19:16:59.714969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:50.309 [2024-11-27 19:16:59.715018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:50.309 [2024-11-27 19:16:59.715077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:50.309 [2024-11-27 19:16:59.715120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.309 [2024-11-27 19:16:59.715236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:50.309 [2024-11-27 19:16:59.715271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:50.309 [2024-11-27 19:16:59.715357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:50.309 [2024-11-27 19:16:59.715497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:50.309 [2024-11-27 19:16:59.715532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:50.309 [2024-11-27 19:16:59.715654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.309 pt2 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.309 "name": "raid_bdev1", 00:18:50.309 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:50.309 "strip_size_kb": 0, 00:18:50.309 "state": "online", 00:18:50.309 "raid_level": "raid1", 00:18:50.309 "superblock": true, 00:18:50.309 "num_base_bdevs": 2, 00:18:50.309 "num_base_bdevs_discovered": 2, 00:18:50.309 "num_base_bdevs_operational": 2, 00:18:50.309 "base_bdevs_list": [ 00:18:50.309 { 00:18:50.309 "name": "pt1", 00:18:50.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.309 "is_configured": true, 00:18:50.309 
"data_offset": 256, 00:18:50.309 "data_size": 7936 00:18:50.309 }, 00:18:50.309 { 00:18:50.309 "name": "pt2", 00:18:50.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.309 "is_configured": true, 00:18:50.309 "data_offset": 256, 00:18:50.309 "data_size": 7936 00:18:50.309 } 00:18:50.309 ] 00:18:50.309 }' 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.309 19:16:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.569 [2024-11-27 19:17:00.174015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.569 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.829 19:17:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.829 "name": "raid_bdev1", 00:18:50.829 "aliases": [ 00:18:50.829 "b24a8fba-1a70-4e90-bf00-f4167ad14f76" 00:18:50.829 ], 00:18:50.829 "product_name": "Raid Volume", 00:18:50.829 "block_size": 4096, 00:18:50.829 "num_blocks": 7936, 00:18:50.829 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:50.829 "md_size": 32, 00:18:50.829 "md_interleave": false, 00:18:50.829 "dif_type": 0, 00:18:50.829 "assigned_rate_limits": { 00:18:50.829 "rw_ios_per_sec": 0, 00:18:50.829 "rw_mbytes_per_sec": 0, 00:18:50.829 "r_mbytes_per_sec": 0, 00:18:50.829 "w_mbytes_per_sec": 0 00:18:50.829 }, 00:18:50.829 "claimed": false, 00:18:50.829 "zoned": false, 00:18:50.829 "supported_io_types": { 00:18:50.829 "read": true, 00:18:50.829 "write": true, 00:18:50.829 "unmap": false, 00:18:50.829 "flush": false, 00:18:50.829 "reset": true, 00:18:50.829 "nvme_admin": false, 00:18:50.829 "nvme_io": false, 00:18:50.829 "nvme_io_md": false, 00:18:50.829 "write_zeroes": true, 00:18:50.829 "zcopy": false, 00:18:50.829 "get_zone_info": false, 00:18:50.829 "zone_management": false, 00:18:50.829 "zone_append": false, 00:18:50.829 "compare": false, 00:18:50.829 "compare_and_write": false, 00:18:50.829 "abort": false, 00:18:50.829 "seek_hole": false, 00:18:50.829 "seek_data": false, 00:18:50.829 "copy": false, 00:18:50.829 "nvme_iov_md": false 00:18:50.829 }, 00:18:50.829 "memory_domains": [ 00:18:50.829 { 00:18:50.829 "dma_device_id": "system", 00:18:50.829 "dma_device_type": 1 00:18:50.829 }, 00:18:50.829 { 00:18:50.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.829 "dma_device_type": 2 00:18:50.830 }, 00:18:50.830 { 00:18:50.830 "dma_device_id": "system", 00:18:50.830 "dma_device_type": 1 00:18:50.830 }, 00:18:50.830 { 00:18:50.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.830 "dma_device_type": 2 00:18:50.830 } 00:18:50.830 ], 00:18:50.830 "driver_specific": { 00:18:50.830 "raid": { 
00:18:50.830 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:50.830 "strip_size_kb": 0, 00:18:50.830 "state": "online", 00:18:50.830 "raid_level": "raid1", 00:18:50.830 "superblock": true, 00:18:50.830 "num_base_bdevs": 2, 00:18:50.830 "num_base_bdevs_discovered": 2, 00:18:50.830 "num_base_bdevs_operational": 2, 00:18:50.830 "base_bdevs_list": [ 00:18:50.830 { 00:18:50.830 "name": "pt1", 00:18:50.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:50.830 "is_configured": true, 00:18:50.830 "data_offset": 256, 00:18:50.830 "data_size": 7936 00:18:50.830 }, 00:18:50.830 { 00:18:50.830 "name": "pt2", 00:18:50.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:50.830 "is_configured": true, 00:18:50.830 "data_offset": 256, 00:18:50.830 "data_size": 7936 00:18:50.830 } 00:18:50.830 ] 00:18:50.830 } 00:18:50.830 } 00:18:50.830 }' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:50.830 pt2' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.830 [2024-11-27 19:17:00.401638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' b24a8fba-1a70-4e90-bf00-f4167ad14f76 '!=' b24a8fba-1a70-4e90-bf00-f4167ad14f76 ']' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.830 [2024-11-27 19:17:00.429377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.830 
19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:50.830 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.091 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.091 "name": "raid_bdev1", 00:18:51.091 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:51.091 "strip_size_kb": 0, 00:18:51.091 "state": "online", 00:18:51.091 "raid_level": "raid1", 00:18:51.091 "superblock": true, 00:18:51.091 "num_base_bdevs": 2, 00:18:51.091 "num_base_bdevs_discovered": 1, 00:18:51.091 "num_base_bdevs_operational": 1, 00:18:51.091 "base_bdevs_list": [ 00:18:51.091 { 00:18:51.091 "name": null, 00:18:51.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.091 "is_configured": false, 00:18:51.091 "data_offset": 0, 00:18:51.091 "data_size": 7936 00:18:51.091 }, 00:18:51.091 { 00:18:51.091 "name": "pt2", 00:18:51.091 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:51.091 "is_configured": true, 00:18:51.091 "data_offset": 256, 00:18:51.091 "data_size": 7936 00:18:51.091 } 00:18:51.091 ] 00:18:51.091 }' 00:18:51.091 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.091 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.352 [2024-11-27 19:17:00.908554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.352 [2024-11-27 19:17:00.908626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.352 [2024-11-27 19:17:00.908707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.352 [2024-11-27 19:17:00.908763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.352 [2024-11-27 19:17:00.908796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.352 19:17:00 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.352 [2024-11-27 19:17:00.980433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:51.352 [2024-11-27 19:17:00.980481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.352 [2024-11-27 19:17:00.980494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:51.352 [2024-11-27 19:17:00.980503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.352 [2024-11-27 19:17:00.982306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.352 [2024-11-27 19:17:00.982350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:51.352 [2024-11-27 19:17:00.982390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:51.352 [2024-11-27 19:17:00.982441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:51.352 [2024-11-27 19:17:00.982521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:51.352 [2024-11-27 19:17:00.982533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:51.352 [2024-11-27 19:17:00.982600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.352 [2024-11-27 19:17:00.982722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:51.352 [2024-11-27 19:17:00.982731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:51.352 [2024-11-27 19:17:00.982816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.612 pt2 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.612 19:17:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.612 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.612 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.612 "name": "raid_bdev1", 00:18:51.612 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:51.612 
"strip_size_kb": 0, 00:18:51.612 "state": "online", 00:18:51.612 "raid_level": "raid1", 00:18:51.612 "superblock": true, 00:18:51.612 "num_base_bdevs": 2, 00:18:51.612 "num_base_bdevs_discovered": 1, 00:18:51.612 "num_base_bdevs_operational": 1, 00:18:51.612 "base_bdevs_list": [ 00:18:51.612 { 00:18:51.612 "name": null, 00:18:51.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.612 "is_configured": false, 00:18:51.612 "data_offset": 256, 00:18:51.612 "data_size": 7936 00:18:51.612 }, 00:18:51.612 { 00:18:51.612 "name": "pt2", 00:18:51.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:51.612 "is_configured": true, 00:18:51.612 "data_offset": 256, 00:18:51.612 "data_size": 7936 00:18:51.612 } 00:18:51.612 ] 00:18:51.612 }' 00:18:51.612 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.612 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.873 [2024-11-27 19:17:01.451728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.873 [2024-11-27 19:17:01.451796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.873 [2024-11-27 19:17:01.451867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.873 [2024-11-27 19:17:01.451919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.873 [2024-11-27 19:17:01.451949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state 
offline 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.873 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.133 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:52.133 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:52.133 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:52.133 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:52.133 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.133 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.133 [2024-11-27 19:17:01.515653] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:52.134 [2024-11-27 19:17:01.515756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.134 [2024-11-27 19:17:01.515789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:52.134 [2024-11-27 19:17:01.515816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.134 [2024-11-27 19:17:01.517721] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.134 [2024-11-27 19:17:01.517783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:52.134 [2024-11-27 19:17:01.517845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:52.134 [2024-11-27 19:17:01.517904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:52.134 [2024-11-27 19:17:01.518054] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:52.134 [2024-11-27 19:17:01.518103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.134 [2024-11-27 19:17:01.518196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:52.134 [2024-11-27 19:17:01.518277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:52.134 [2024-11-27 19:17:01.518351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:52.134 [2024-11-27 19:17:01.518359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:52.134 [2024-11-27 19:17:01.518415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:52.134 [2024-11-27 19:17:01.518516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:52.134 [2024-11-27 19:17:01.518526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:52.134 [2024-11-27 19:17:01.518628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.134 pt1 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.134 "name": "raid_bdev1", 
00:18:52.134 "uuid": "b24a8fba-1a70-4e90-bf00-f4167ad14f76", 00:18:52.134 "strip_size_kb": 0, 00:18:52.134 "state": "online", 00:18:52.134 "raid_level": "raid1", 00:18:52.134 "superblock": true, 00:18:52.134 "num_base_bdevs": 2, 00:18:52.134 "num_base_bdevs_discovered": 1, 00:18:52.134 "num_base_bdevs_operational": 1, 00:18:52.134 "base_bdevs_list": [ 00:18:52.134 { 00:18:52.134 "name": null, 00:18:52.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.134 "is_configured": false, 00:18:52.134 "data_offset": 256, 00:18:52.134 "data_size": 7936 00:18:52.134 }, 00:18:52.134 { 00:18:52.134 "name": "pt2", 00:18:52.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.134 "is_configured": true, 00:18:52.134 "data_offset": 256, 00:18:52.134 "data_size": 7936 00:18:52.134 } 00:18:52.134 ] 00:18:52.134 }' 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.134 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 
-- # jq -r '.[] | .uuid' 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.395 19:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.395 [2024-11-27 19:17:01.995032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.395 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' b24a8fba-1a70-4e90-bf00-f4167ad14f76 '!=' b24a8fba-1a70-4e90-bf00-f4167ad14f76 ']' 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87514 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87514 ']' 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87514 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87514 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87514' 00:18:52.656 killing process with pid 87514 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87514 00:18:52.656 [2024-11-27 19:17:02.063503] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.656 [2024-11-27 19:17:02.063589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.656 [2024-11-27 19:17:02.063634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.656 [2024-11-27 19:17:02.063650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:52.656 19:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87514 00:18:52.656 [2024-11-27 19:17:02.272560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.040 19:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:54.040 00:18:54.040 real 0m5.983s 00:18:54.040 user 0m9.083s 00:18:54.040 sys 0m1.093s 00:18:54.040 ************************************ 00:18:54.040 END TEST raid_superblock_test_md_separate 00:18:54.040 ************************************ 00:18:54.040 19:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.040 19:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.040 19:17:03 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:54.040 19:17:03 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:54.040 19:17:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:54.040 19:17:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.040 19:17:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.040 ************************************ 00:18:54.040 START TEST raid_rebuild_test_sb_md_separate 00:18:54.040 ************************************ 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:54.040 19:17:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87832 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87832 00:18:54.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87832 ']' 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.040 19:17:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.040 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:54.040 Zero copy mechanism will not be used. 00:18:54.040 [2024-11-27 19:17:03.511180] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:18:54.040 [2024-11-27 19:17:03.511287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87832 ] 00:18:54.300 [2024-11-27 19:17:03.685157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.300 [2024-11-27 19:17:03.790783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.560 [2024-11-27 19:17:03.966997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.560 [2024-11-27 19:17:03.967056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.820 BaseBdev1_malloc 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.820 [2024-11-27 19:17:04.369748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:54.820 [2024-11-27 19:17:04.369873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.820 [2024-11-27 19:17:04.369900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:54.820 [2024-11-27 19:17:04.369911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.820 [2024-11-27 19:17:04.371758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.820 [2024-11-27 19:17:04.371792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:54.820 BaseBdev1 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.820 BaseBdev2_malloc 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.820 [2024-11-27 19:17:04.425970] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:54.820 [2024-11-27 19:17:04.426028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.820 [2024-11-27 19:17:04.426047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:54.820 [2024-11-27 19:17:04.426058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.820 [2024-11-27 19:17:04.427843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.820 [2024-11-27 19:17:04.427874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:54.820 BaseBdev2 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.820 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.081 spare_malloc 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.081 spare_delay 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.081 [2024-11-27 19:17:04.524373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:55.081 [2024-11-27 19:17:04.524425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.081 [2024-11-27 19:17:04.524445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:55.081 [2024-11-27 19:17:04.524456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.081 [2024-11-27 19:17:04.526263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.081 [2024-11-27 19:17:04.526300] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:55.081 spare 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.081 [2024-11-27 19:17:04.536390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.081 [2024-11-27 19:17:04.538105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.081 [2024-11-27 19:17:04.538290] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:55.081 [2024-11-27 19:17:04.538304] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:55.081 [2024-11-27 19:17:04.538370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:55.081 [2024-11-27 19:17:04.538480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:55.081 [2024-11-27 19:17:04.538509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:55.081 [2024-11-27 19:17:04.538621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.081 19:17:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.081 "name": "raid_bdev1", 00:18:55.081 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:18:55.081 "strip_size_kb": 0, 00:18:55.081 "state": "online", 00:18:55.081 "raid_level": "raid1", 00:18:55.081 "superblock": true, 00:18:55.081 "num_base_bdevs": 2, 00:18:55.081 "num_base_bdevs_discovered": 2, 00:18:55.081 "num_base_bdevs_operational": 2, 00:18:55.081 "base_bdevs_list": [ 00:18:55.081 { 00:18:55.081 "name": "BaseBdev1", 00:18:55.081 "uuid": "486f90c0-0cb3-5476-99a5-c837e1db66bb", 00:18:55.081 "is_configured": true, 00:18:55.081 "data_offset": 256, 00:18:55.081 "data_size": 7936 00:18:55.081 }, 00:18:55.081 { 00:18:55.081 "name": "BaseBdev2", 00:18:55.081 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:18:55.081 "is_configured": true, 00:18:55.081 "data_offset": 256, 00:18:55.081 "data_size": 7936 00:18:55.081 } 00:18:55.081 ] 00:18:55.081 }' 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.081 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.341 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:55.341 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:55.341 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.341 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.341 [2024-11-27 19:17:04.951924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.341 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.341 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:55.601 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.601 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:55.601 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.601 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.601 19:17:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:55.601 19:17:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.601 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:55.602 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:55.602 [2024-11-27 19:17:05.207294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:55.602 /dev/nbd0 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:55.862 1+0 records in
00:18:55.862 1+0 records out
00:18:55.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441958 s, 9.3 MB/s
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:18:55.862 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:18:56.432 7936+0 records in
00:18:56.432 7936+0 records out
00:18:56.432 32505856 bytes (33 MB, 31 MiB) copied, 0.586281 s, 55.4 MB/s
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:56.432 19:17:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:56.692 [2024-11-27 19:17:06.077754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:56.692 [2024-11-27 19:17:06.110661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.692 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:56.693 "name": "raid_bdev1",
00:18:56.693 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:18:56.693 "strip_size_kb": 0,
00:18:56.693 "state": "online",
00:18:56.693 "raid_level": "raid1",
00:18:56.693 "superblock": true,
00:18:56.693 "num_base_bdevs": 2,
00:18:56.693 "num_base_bdevs_discovered": 1,
00:18:56.693 "num_base_bdevs_operational": 1,
00:18:56.693 "base_bdevs_list": [
00:18:56.693 {
00:18:56.693 "name": null,
00:18:56.693 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:56.693 "is_configured": false,
00:18:56.693 "data_offset": 0,
00:18:56.693 "data_size": 7936
00:18:56.693 },
00:18:56.693 {
00:18:56.693 "name": "BaseBdev2",
00:18:56.693 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:18:56.693 "is_configured": true,
00:18:56.693 "data_offset": 256,
00:18:56.693 "data_size": 7936
00:18:56.693 }
00:18:56.693 ]
00:18:56.693 }'
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:56.693 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:56.953 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:56.953 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.953 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:56.953 [2024-11-27 19:17:06.541901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:56.953 [2024-11-27 19:17:06.555857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:18:56.953 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.953 19:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:18:56.953 [2024-11-27 19:17:06.557609] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:58.335 "name": "raid_bdev1",
00:18:58.335 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:18:58.335 "strip_size_kb": 0,
00:18:58.335 "state": "online",
00:18:58.335 "raid_level": "raid1",
00:18:58.335 "superblock": true,
00:18:58.335 "num_base_bdevs": 2,
00:18:58.335 "num_base_bdevs_discovered": 2,
00:18:58.335 "num_base_bdevs_operational": 2,
00:18:58.335 "process": {
00:18:58.335 "type": "rebuild",
00:18:58.335 "target": "spare",
00:18:58.335 "progress": {
00:18:58.335 "blocks": 2560,
00:18:58.335 "percent": 32
00:18:58.335 }
00:18:58.335 }
00:18:58.335 },
00:18:58.335 "base_bdevs_list": [
00:18:58.335 {
00:18:58.335 "name": "spare",
00:18:58.335 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:18:58.335 "is_configured": true,
00:18:58.335 "data_offset": 256,
00:18:58.335 "data_size": 7936
00:18:58.335 },
00:18:58.335 {
00:18:58.335 "name": "BaseBdev2",
00:18:58.335 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:18:58.335 "is_configured": true,
00:18:58.335 "data_offset": 256,
00:18:58.335 "data_size": 7936
00:18:58.335 }
00:18:58.335 ]
00:18:58.335 }'
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.335 [2024-11-27 19:17:07.725340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:58.335 [2024-11-27 19:17:07.762345] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:58.335 [2024-11-27 19:17:07.762397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:58.335 [2024-11-27 19:17:07.762410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:58.335 [2024-11-27 19:17:07.762421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.335 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:58.335 "name": "raid_bdev1",
00:18:58.335 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:18:58.335 "strip_size_kb": 0,
00:18:58.335 "state": "online",
00:18:58.335 "raid_level": "raid1",
00:18:58.335 "superblock": true,
00:18:58.336 "num_base_bdevs": 2,
00:18:58.336 "num_base_bdevs_discovered": 1,
00:18:58.336 "num_base_bdevs_operational": 1,
00:18:58.336 "base_bdevs_list": [
00:18:58.336 {
00:18:58.336 "name": null,
00:18:58.336 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:58.336 "is_configured": false,
00:18:58.336 "data_offset": 0,
00:18:58.336 "data_size": 7936
00:18:58.336 },
00:18:58.336 {
00:18:58.336 "name": "BaseBdev2",
00:18:58.336 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:18:58.336 "is_configured": true,
00:18:58.336 "data_offset": 256,
00:18:58.336 "data_size": 7936
00:18:58.336 }
00:18:58.336 ]
00:18:58.336 }'
00:18:58.336 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:58.336 19:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.904 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:58.904 "name": "raid_bdev1",
00:18:58.904 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:18:58.904 "strip_size_kb": 0,
00:18:58.904 "state": "online",
00:18:58.904 "raid_level": "raid1",
00:18:58.904 "superblock": true,
00:18:58.904 "num_base_bdevs": 2,
00:18:58.905 "num_base_bdevs_discovered": 1,
00:18:58.905 "num_base_bdevs_operational": 1,
00:18:58.905 "base_bdevs_list": [
00:18:58.905 {
00:18:58.905 "name": null,
00:18:58.905 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:58.905 "is_configured": false,
00:18:58.905 "data_offset": 0,
00:18:58.905 "data_size": 7936
00:18:58.905 },
00:18:58.905 {
00:18:58.905 "name": "BaseBdev2",
00:18:58.905 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:18:58.905 "is_configured": true,
00:18:58.905 "data_offset": 256,
00:18:58.905 "data_size": 7936
00:18:58.905 }
00:18:58.905 ]
00:18:58.905 }'
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:58.905 [2024-11-27 19:17:08.364603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:58.905 [2024-11-27 19:17:08.378108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.905 19:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:18:58.905 [2024-11-27 19:17:08.379917] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:59.844 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:59.844 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:59.845 "name": "raid_bdev1",
00:18:59.845 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:18:59.845 "strip_size_kb": 0,
00:18:59.845 "state": "online",
00:18:59.845 "raid_level": "raid1",
00:18:59.845 "superblock": true,
00:18:59.845 "num_base_bdevs": 2,
00:18:59.845 "num_base_bdevs_discovered": 2,
00:18:59.845 "num_base_bdevs_operational": 2,
00:18:59.845 "process": {
00:18:59.845 "type": "rebuild",
00:18:59.845 "target": "spare",
00:18:59.845 "progress": {
00:18:59.845 "blocks": 2560,
00:18:59.845 "percent": 32
00:18:59.845 }
00:18:59.845 }
00:18:59.845 },
00:18:59.845 "base_bdevs_list": [
00:18:59.845 {
00:18:59.845 "name": "spare",
00:18:59.845 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:18:59.845 "is_configured": true,
00:18:59.845 "data_offset": 256,
00:18:59.845 "data_size": 7936
00:18:59.845 },
00:18:59.845 {
00:18:59.845 "name": "BaseBdev2",
00:18:59.845 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:18:59.845 "is_configured": true,
00:18:59.845 "data_offset": 256,
00:18:59.845 "data_size": 7936
00:18:59.845 }
00:18:59.845 ]
00:18:59.845 }'
00:18:59.845 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:19:00.105 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=711
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:00.105 "name": "raid_bdev1",
00:19:00.105 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:19:00.105 "strip_size_kb": 0,
00:19:00.105 "state": "online",
00:19:00.105 "raid_level": "raid1",
00:19:00.105 "superblock": true,
00:19:00.105 "num_base_bdevs": 2,
00:19:00.105 "num_base_bdevs_discovered": 2,
00:19:00.105 "num_base_bdevs_operational": 2,
00:19:00.105 "process": {
00:19:00.105 "type": "rebuild",
00:19:00.105 "target": "spare",
00:19:00.105 "progress": {
00:19:00.105 "blocks": 2816,
00:19:00.105 "percent": 35
00:19:00.105 }
00:19:00.105 }
00:19:00.105 },
00:19:00.105 "base_bdevs_list": [
00:19:00.105 {
00:19:00.105 "name": "spare",
00:19:00.105 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:19:00.105 "is_configured": true,
00:19:00.105 "data_offset": 256,
00:19:00.105 "data_size": 7936
00:19:00.105 },
00:19:00.105 {
00:19:00.105 "name": "BaseBdev2",
00:19:00.105 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:19:00.105 "is_configured": true,
00:19:00.105 "data_offset": 256,
00:19:00.105 "data_size": 7936
00:19:00.105 }
00:19:00.105 ]
00:19:00.105 }'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:00.105 19:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:19:01.045 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:19:01.045 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:01.045 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:01.045 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:01.045 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:01.045 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:01.305 "name": "raid_bdev1",
00:19:01.305 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:19:01.305 "strip_size_kb": 0,
00:19:01.305 "state": "online",
00:19:01.305 "raid_level": "raid1",
00:19:01.305 "superblock": true,
00:19:01.305 "num_base_bdevs": 2,
00:19:01.305 "num_base_bdevs_discovered": 2,
00:19:01.305 "num_base_bdevs_operational": 2,
00:19:01.305 "process": {
00:19:01.305 "type": "rebuild",
00:19:01.305 "target": "spare",
00:19:01.305 "progress": {
00:19:01.305 "blocks": 5632,
00:19:01.305 "percent": 70
00:19:01.305 }
00:19:01.305 }
00:19:01.305 },
00:19:01.305 "base_bdevs_list": [
00:19:01.305 {
00:19:01.305 "name": "spare",
00:19:01.305 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:19:01.305 "is_configured": true,
00:19:01.305 "data_offset": 256,
00:19:01.305 "data_size": 7936
00:19:01.305 },
00:19:01.305 {
00:19:01.305 "name": "BaseBdev2",
00:19:01.305 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:19:01.305 "is_configured": true,
00:19:01.305 "data_offset": 256,
00:19:01.305 "data_size": 7936
00:19:01.305 }
00:19:01.305 ]
00:19:01.305 }'
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:19:01.305 19:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:19:01.875 [2024-11-27 19:17:11.491495] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:19:01.875 [2024-11-27 19:17:11.491563] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:19:01.875 [2024-11-27 19:17:11.491663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:02.449 "name": "raid_bdev1",
00:19:02.449 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:19:02.449 "strip_size_kb": 0,
00:19:02.449 "state": "online",
00:19:02.449 "raid_level": "raid1",
00:19:02.449 "superblock": true,
00:19:02.449 "num_base_bdevs": 2,
00:19:02.449 "num_base_bdevs_discovered": 2,
00:19:02.449 "num_base_bdevs_operational": 2,
00:19:02.449 "base_bdevs_list": [
00:19:02.449 {
00:19:02.449 "name": "spare",
00:19:02.449 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:19:02.449 "is_configured": true,
00:19:02.449 "data_offset": 256,
00:19:02.449 "data_size": 7936
00:19:02.449 },
00:19:02.449 {
00:19:02.449 "name": "BaseBdev2",
00:19:02.449 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:19:02.449 "is_configured": true,
00:19:02.449 "data_offset": 256,
00:19:02.449 "data_size": 7936
00:19:02.449 }
00:19:02.449 ]
00:19:02.449 }'
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:02.449 19:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.449 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:19:02.449 "name": "raid_bdev1",
00:19:02.449 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:19:02.449 "strip_size_kb": 0,
00:19:02.449 "state": "online",
00:19:02.449 "raid_level": "raid1",
00:19:02.449 "superblock": true,
00:19:02.449 "num_base_bdevs": 2,
00:19:02.449 "num_base_bdevs_discovered": 2,
00:19:02.449 "num_base_bdevs_operational": 2,
00:19:02.449 "base_bdevs_list": [
00:19:02.449 {
00:19:02.449 "name": "spare",
00:19:02.449 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:19:02.449 "is_configured": true,
00:19:02.449 "data_offset": 256,
00:19:02.449 "data_size": 7936
00:19:02.449 },
00:19:02.449 {
00:19:02.449 "name": "BaseBdev2",
00:19:02.449 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:19:02.449 "is_configured": true,
00:19:02.449 "data_offset": 256,
00:19:02.449 "data_size": 7936
00:19:02.449 }
00:19:02.449 ]
00:19:02.449 }'
00:19:02.449 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:19:02.449 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:19:02.449 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:19:02.709 "name": "raid_bdev1",
00:19:02.709 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6",
00:19:02.709 "strip_size_kb": 0,
00:19:02.709 "state": "online",
00:19:02.709 "raid_level": "raid1",
00:19:02.709 "superblock": true,
00:19:02.709 "num_base_bdevs": 2,
00:19:02.709 "num_base_bdevs_discovered": 2,
00:19:02.709 "num_base_bdevs_operational": 2,
00:19:02.709 "base_bdevs_list": [
00:19:02.709 {
00:19:02.709 "name": "spare",
00:19:02.709 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f",
00:19:02.709 "is_configured": true,
00:19:02.709 "data_offset": 256,
00:19:02.709 "data_size": 7936
00:19:02.709 },
00:19:02.709 {
00:19:02.709 "name": "BaseBdev2",
00:19:02.709 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc",
00:19:02.709 "is_configured": true,
00:19:02.709 "data_offset": 256,
00:19:02.709 "data_size": 7936
00:19:02.709 }
00:19:02.709 ]
00:19:02.709 }'
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:19:02.709 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:02.970 [2024-11-27 19:17:12.524871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:02.970 [2024-11-27 19:17:12.524904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:02.970 [2024-11-27 19:17:12.524976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:02.970 [2024-11-27 19:17:12.525036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:02.970 [2024-11-27 19:17:12.525046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:02.970 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:19:03.230 /dev/nbd0
00:19:03.230 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:03.230 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.230 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.231 1+0 records in 00:19:03.231 1+0 records out 00:19:03.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334762 s, 12.2 MB/s 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.231 19:17:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.231 19:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:03.491 /dev/nbd1 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.491 1+0 records in 00:19:03.491 1+0 records out 00:19:03.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000434953 s, 9.4 MB/s 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.491 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.751 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:04.010 19:17:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.010 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:04.271 19:17:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 [2024-11-27 19:17:13.711918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:04.271 [2024-11-27 19:17:13.711987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.271 [2024-11-27 19:17:13.712017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:04.271 [2024-11-27 19:17:13.712031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.271 [2024-11-27 19:17:13.714658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.271 [2024-11-27 19:17:13.714723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:04.271 [2024-11-27 19:17:13.714800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:04.271 [2024-11-27 19:17:13.714869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:19:04.271 [2024-11-27 19:17:13.715076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.271 spare 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 [2024-11-27 19:17:13.814996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:04.271 [2024-11-27 19:17:13.815026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.271 [2024-11-27 19:17:13.815113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:04.271 [2024-11-27 19:17:13.815241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:04.271 [2024-11-27 19:17:13.815250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:04.271 [2024-11-27 19:17:13.815352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.271 "name": "raid_bdev1", 00:19:04.271 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:04.271 "strip_size_kb": 0, 00:19:04.271 "state": "online", 00:19:04.271 "raid_level": "raid1", 00:19:04.271 "superblock": true, 00:19:04.271 "num_base_bdevs": 2, 00:19:04.271 "num_base_bdevs_discovered": 2, 00:19:04.271 "num_base_bdevs_operational": 2, 00:19:04.271 "base_bdevs_list": [ 00:19:04.271 { 00:19:04.271 "name": "spare", 00:19:04.271 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f", 00:19:04.271 
"is_configured": true, 00:19:04.271 "data_offset": 256, 00:19:04.271 "data_size": 7936 00:19:04.271 }, 00:19:04.271 { 00:19:04.271 "name": "BaseBdev2", 00:19:04.271 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:04.271 "is_configured": true, 00:19:04.271 "data_offset": 256, 00:19:04.271 "data_size": 7936 00:19:04.271 } 00:19:04.271 ] 00:19:04.271 }' 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.271 19:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.842 "name": "raid_bdev1", 00:19:04.842 "uuid": 
"788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:04.842 "strip_size_kb": 0, 00:19:04.842 "state": "online", 00:19:04.842 "raid_level": "raid1", 00:19:04.842 "superblock": true, 00:19:04.842 "num_base_bdevs": 2, 00:19:04.842 "num_base_bdevs_discovered": 2, 00:19:04.842 "num_base_bdevs_operational": 2, 00:19:04.842 "base_bdevs_list": [ 00:19:04.842 { 00:19:04.842 "name": "spare", 00:19:04.842 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f", 00:19:04.842 "is_configured": true, 00:19:04.842 "data_offset": 256, 00:19:04.842 "data_size": 7936 00:19:04.842 }, 00:19:04.842 { 00:19:04.842 "name": "BaseBdev2", 00:19:04.842 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:04.842 "is_configured": true, 00:19:04.842 "data_offset": 256, 00:19:04.842 "data_size": 7936 00:19:04.842 } 00:19:04.842 ] 00:19:04.842 }' 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.842 [2024-11-27 19:17:14.414702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.842 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.843 19:17:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.843 "name": "raid_bdev1", 00:19:04.843 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:04.843 "strip_size_kb": 0, 00:19:04.843 "state": "online", 00:19:04.843 "raid_level": "raid1", 00:19:04.843 "superblock": true, 00:19:04.843 "num_base_bdevs": 2, 00:19:04.843 "num_base_bdevs_discovered": 1, 00:19:04.843 "num_base_bdevs_operational": 1, 00:19:04.843 "base_bdevs_list": [ 00:19:04.843 { 00:19:04.843 "name": null, 00:19:04.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.843 "is_configured": false, 00:19:04.843 "data_offset": 0, 00:19:04.843 "data_size": 7936 00:19:04.843 }, 00:19:04.843 { 00:19:04.843 "name": "BaseBdev2", 00:19:04.843 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:04.843 "is_configured": true, 00:19:04.843 "data_offset": 256, 00:19:04.843 "data_size": 7936 00:19:04.843 } 00:19:04.843 ] 00:19:04.843 }' 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.843 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.412 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:05.412 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.412 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:05.412 [2024-11-27 19:17:14.857917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.412 [2024-11-27 19:17:14.858041] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.412 [2024-11-27 19:17:14.858057] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:05.412 [2024-11-27 19:17:14.858093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.412 [2024-11-27 19:17:14.870501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:05.412 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.412 19:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:05.412 [2024-11-27 19:17:14.872359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.353 19:17:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.353 "name": "raid_bdev1", 00:19:06.353 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:06.353 "strip_size_kb": 0, 00:19:06.353 "state": "online", 00:19:06.353 "raid_level": "raid1", 00:19:06.353 "superblock": true, 00:19:06.353 "num_base_bdevs": 2, 00:19:06.353 "num_base_bdevs_discovered": 2, 00:19:06.353 "num_base_bdevs_operational": 2, 00:19:06.353 "process": { 00:19:06.353 "type": "rebuild", 00:19:06.353 "target": "spare", 00:19:06.353 "progress": { 00:19:06.353 "blocks": 2560, 00:19:06.353 "percent": 32 00:19:06.353 } 00:19:06.353 }, 00:19:06.353 "base_bdevs_list": [ 00:19:06.353 { 00:19:06.353 "name": "spare", 00:19:06.353 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f", 00:19:06.353 "is_configured": true, 00:19:06.353 "data_offset": 256, 00:19:06.353 "data_size": 7936 00:19:06.353 }, 00:19:06.353 { 00:19:06.353 "name": "BaseBdev2", 00:19:06.353 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:06.353 "is_configured": true, 00:19:06.353 "data_offset": 256, 00:19:06.353 "data_size": 7936 00:19:06.353 } 00:19:06.353 ] 00:19:06.353 }' 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.353 19:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.613 [2024-11-27 19:17:16.028716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.613 [2024-11-27 19:17:16.077097] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:06.613 [2024-11-27 19:17:16.077152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.613 [2024-11-27 19:17:16.077164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.613 [2024-11-27 19:17:16.077182] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.613 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.614 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.614 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.614 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.614 "name": "raid_bdev1", 00:19:06.614 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:06.614 "strip_size_kb": 0, 00:19:06.614 "state": "online", 00:19:06.614 "raid_level": "raid1", 00:19:06.614 "superblock": true, 00:19:06.614 "num_base_bdevs": 2, 00:19:06.614 "num_base_bdevs_discovered": 1, 00:19:06.614 "num_base_bdevs_operational": 1, 00:19:06.614 "base_bdevs_list": [ 00:19:06.614 { 00:19:06.614 "name": null, 00:19:06.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.614 "is_configured": false, 00:19:06.614 "data_offset": 0, 00:19:06.614 "data_size": 7936 00:19:06.614 }, 00:19:06.614 { 00:19:06.614 "name": "BaseBdev2", 00:19:06.614 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:06.614 "is_configured": true, 00:19:06.614 "data_offset": 256, 00:19:06.614 "data_size": 7936 00:19:06.614 } 00:19:06.614 ] 00:19:06.614 }' 00:19:06.614 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.614 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.183 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:07.183 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.183 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.183 [2024-11-27 19:17:16.547114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:07.183 [2024-11-27 19:17:16.547170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.183 [2024-11-27 19:17:16.547193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:07.183 [2024-11-27 19:17:16.547204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.183 [2024-11-27 19:17:16.547439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.183 [2024-11-27 19:17:16.547463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:07.183 [2024-11-27 19:17:16.547511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:07.183 [2024-11-27 19:17:16.547523] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:07.183 [2024-11-27 19:17:16.547532] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:07.183 [2024-11-27 19:17:16.547554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:07.183 [2024-11-27 19:17:16.560491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:07.183 spare 00:19:07.183 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.183 19:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:07.183 [2024-11-27 19:17:16.562339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.123 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.123 "name": 
"raid_bdev1", 00:19:08.123 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:08.123 "strip_size_kb": 0, 00:19:08.123 "state": "online", 00:19:08.123 "raid_level": "raid1", 00:19:08.123 "superblock": true, 00:19:08.123 "num_base_bdevs": 2, 00:19:08.123 "num_base_bdevs_discovered": 2, 00:19:08.123 "num_base_bdevs_operational": 2, 00:19:08.123 "process": { 00:19:08.123 "type": "rebuild", 00:19:08.123 "target": "spare", 00:19:08.123 "progress": { 00:19:08.123 "blocks": 2560, 00:19:08.123 "percent": 32 00:19:08.123 } 00:19:08.124 }, 00:19:08.124 "base_bdevs_list": [ 00:19:08.124 { 00:19:08.124 "name": "spare", 00:19:08.124 "uuid": "bceae672-d23a-5b0e-abb2-b267155b3c0f", 00:19:08.124 "is_configured": true, 00:19:08.124 "data_offset": 256, 00:19:08.124 "data_size": 7936 00:19:08.124 }, 00:19:08.124 { 00:19:08.124 "name": "BaseBdev2", 00:19:08.124 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:08.124 "is_configured": true, 00:19:08.124 "data_offset": 256, 00:19:08.124 "data_size": 7936 00:19:08.124 } 00:19:08.124 ] 00:19:08.124 }' 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.124 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.124 [2024-11-27 19:17:17.726833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:08.383 [2024-11-27 19:17:17.766688] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:08.384 [2024-11-27 19:17:17.766750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.384 [2024-11-27 19:17:17.766765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.384 [2024-11-27 19:17:17.766772] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.384 "name": "raid_bdev1", 00:19:08.384 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:08.384 "strip_size_kb": 0, 00:19:08.384 "state": "online", 00:19:08.384 "raid_level": "raid1", 00:19:08.384 "superblock": true, 00:19:08.384 "num_base_bdevs": 2, 00:19:08.384 "num_base_bdevs_discovered": 1, 00:19:08.384 "num_base_bdevs_operational": 1, 00:19:08.384 "base_bdevs_list": [ 00:19:08.384 { 00:19:08.384 "name": null, 00:19:08.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.384 "is_configured": false, 00:19:08.384 "data_offset": 0, 00:19:08.384 "data_size": 7936 00:19:08.384 }, 00:19:08.384 { 00:19:08.384 "name": "BaseBdev2", 00:19:08.384 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:08.384 "is_configured": true, 00:19:08.384 "data_offset": 256, 00:19:08.384 "data_size": 7936 00:19:08.384 } 00:19:08.384 ] 00:19:08.384 }' 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.384 19:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.645 19:17:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.645 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.905 "name": "raid_bdev1", 00:19:08.905 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:08.905 "strip_size_kb": 0, 00:19:08.905 "state": "online", 00:19:08.905 "raid_level": "raid1", 00:19:08.905 "superblock": true, 00:19:08.905 "num_base_bdevs": 2, 00:19:08.905 "num_base_bdevs_discovered": 1, 00:19:08.905 "num_base_bdevs_operational": 1, 00:19:08.905 "base_bdevs_list": [ 00:19:08.905 { 00:19:08.905 "name": null, 00:19:08.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.905 "is_configured": false, 00:19:08.905 "data_offset": 0, 00:19:08.905 "data_size": 7936 00:19:08.905 }, 00:19:08.905 { 00:19:08.905 "name": "BaseBdev2", 00:19:08.905 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:08.905 "is_configured": true, 00:19:08.905 "data_offset": 256, 00:19:08.905 "data_size": 7936 00:19:08.905 } 00:19:08.905 ] 00:19:08.905 }' 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.905 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.905 [2024-11-27 19:17:18.408185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.905 [2024-11-27 19:17:18.408306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.905 [2024-11-27 19:17:18.408331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:08.905 [2024-11-27 19:17:18.408340] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.905 [2024-11-27 19:17:18.408547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.905 [2024-11-27 19:17:18.408560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:08.906 [2024-11-27 19:17:18.408605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:08.906 [2024-11-27 19:17:18.408617] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.906 [2024-11-27 19:17:18.408627] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.906 [2024-11-27 19:17:18.408635] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:08.906 BaseBdev1 00:19:08.906 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.906 19:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.846 "name": "raid_bdev1", 00:19:09.846 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:09.846 "strip_size_kb": 0, 00:19:09.846 "state": "online", 00:19:09.846 "raid_level": "raid1", 00:19:09.846 "superblock": true, 00:19:09.846 "num_base_bdevs": 2, 00:19:09.846 "num_base_bdevs_discovered": 1, 00:19:09.846 "num_base_bdevs_operational": 1, 00:19:09.846 "base_bdevs_list": [ 00:19:09.846 { 00:19:09.846 "name": null, 00:19:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.846 "is_configured": false, 00:19:09.846 "data_offset": 0, 00:19:09.846 "data_size": 7936 00:19:09.846 }, 00:19:09.846 { 00:19:09.846 "name": "BaseBdev2", 00:19:09.846 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:09.846 "is_configured": true, 00:19:09.846 "data_offset": 256, 00:19:09.846 "data_size": 7936 00:19:09.846 } 00:19:09.846 ] 00:19:09.846 }' 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.846 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.417 "name": "raid_bdev1", 00:19:10.417 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:10.417 "strip_size_kb": 0, 00:19:10.417 "state": "online", 00:19:10.417 "raid_level": "raid1", 00:19:10.417 "superblock": true, 00:19:10.417 "num_base_bdevs": 2, 00:19:10.417 "num_base_bdevs_discovered": 1, 00:19:10.417 "num_base_bdevs_operational": 1, 00:19:10.417 "base_bdevs_list": [ 00:19:10.417 { 00:19:10.417 "name": null, 00:19:10.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.417 "is_configured": false, 00:19:10.417 "data_offset": 0, 00:19:10.417 "data_size": 7936 00:19:10.417 }, 00:19:10.417 { 00:19:10.417 "name": "BaseBdev2", 00:19:10.417 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:10.417 "is_configured": 
true, 00:19:10.417 "data_offset": 256, 00:19:10.417 "data_size": 7936 00:19:10.417 } 00:19:10.417 ] 00:19:10.417 }' 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.417 19:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.417 [2024-11-27 19:17:19.997836] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.417 [2024-11-27 19:17:19.997997] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.417 [2024-11-27 19:17:19.998070] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:10.417 request: 00:19:10.417 { 00:19:10.417 "base_bdev": "BaseBdev1", 00:19:10.417 "raid_bdev": "raid_bdev1", 00:19:10.417 "method": "bdev_raid_add_base_bdev", 00:19:10.417 "req_id": 1 00:19:10.417 } 00:19:10.417 Got JSON-RPC error response 00:19:10.417 response: 00:19:10.417 { 00:19:10.417 "code": -22, 00:19:10.417 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:10.417 } 00:19:10.417 19:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:10.417 19:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:10.417 19:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.417 19:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.417 19:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.417 19:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.803 "name": "raid_bdev1", 00:19:11.803 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:11.803 "strip_size_kb": 0, 00:19:11.803 "state": "online", 00:19:11.803 "raid_level": "raid1", 00:19:11.803 "superblock": true, 00:19:11.803 "num_base_bdevs": 2, 00:19:11.803 "num_base_bdevs_discovered": 1, 00:19:11.803 "num_base_bdevs_operational": 1, 00:19:11.803 "base_bdevs_list": [ 00:19:11.803 { 00:19:11.803 "name": null, 00:19:11.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.803 "is_configured": false, 00:19:11.803 
"data_offset": 0, 00:19:11.803 "data_size": 7936 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "name": "BaseBdev2", 00:19:11.803 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:11.803 "is_configured": true, 00:19:11.803 "data_offset": 256, 00:19:11.803 "data_size": 7936 00:19:11.803 } 00:19:11.803 ] 00:19:11.803 }' 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.803 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.079 "name": "raid_bdev1", 00:19:12.079 "uuid": "788966c6-28ee-471e-b1a1-c3fd29fac0f6", 00:19:12.079 
"strip_size_kb": 0, 00:19:12.079 "state": "online", 00:19:12.079 "raid_level": "raid1", 00:19:12.079 "superblock": true, 00:19:12.079 "num_base_bdevs": 2, 00:19:12.079 "num_base_bdevs_discovered": 1, 00:19:12.079 "num_base_bdevs_operational": 1, 00:19:12.079 "base_bdevs_list": [ 00:19:12.079 { 00:19:12.079 "name": null, 00:19:12.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.079 "is_configured": false, 00:19:12.079 "data_offset": 0, 00:19:12.079 "data_size": 7936 00:19:12.079 }, 00:19:12.079 { 00:19:12.079 "name": "BaseBdev2", 00:19:12.079 "uuid": "937df7da-58c9-51f7-b668-64aa724e49fc", 00:19:12.079 "is_configured": true, 00:19:12.079 "data_offset": 256, 00:19:12.079 "data_size": 7936 00:19:12.079 } 00:19:12.079 ] 00:19:12.079 }' 00:19:12.079 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87832 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87832 ']' 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87832 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87832 00:19:12.080 19:17:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.080 killing process with pid 87832 00:19:12.080 Received shutdown signal, test time was about 60.000000 seconds 00:19:12.080 00:19:12.080 Latency(us) 00:19:12.080 [2024-11-27T19:17:21.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.080 [2024-11-27T19:17:21.716Z] =================================================================================================================== 00:19:12.080 [2024-11-27T19:17:21.716Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87832' 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87832 00:19:12.080 [2024-11-27 19:17:21.604652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.080 [2024-11-27 19:17:21.604755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.080 [2024-11-27 19:17:21.604793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.080 [2024-11-27 19:17:21.604803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:12.080 19:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87832 00:19:12.355 [2024-11-27 19:17:21.900393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.297 19:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:13.297 00:19:13.297 real 0m19.517s 00:19:13.297 user 0m25.478s 00:19:13.297 sys 0m2.641s 00:19:13.297 
************************************ 00:19:13.297 END TEST raid_rebuild_test_sb_md_separate 00:19:13.297 ************************************ 00:19:13.297 19:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.297 19:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:13.558 19:17:22 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:13.558 19:17:22 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:13.558 19:17:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:13.558 19:17:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.558 19:17:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.558 ************************************ 00:19:13.558 START TEST raid_state_function_test_sb_md_interleaved 00:19:13.558 ************************************ 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.558 19:17:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:13.558 Process raid pid: 88524 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88524 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88524' 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88524 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88524 ']' 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.558 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.558 [2024-11-27 19:17:23.120333] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:13.558 [2024-11-27 19:17:23.120487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.818 [2024-11-27 19:17:23.300643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.818 [2024-11-27 19:17:23.407746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.078 [2024-11-27 19:17:23.572500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.078 [2024-11-27 19:17:23.572536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.339 [2024-11-27 19:17:23.931787] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.339 [2024-11-27 19:17:23.931845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.339 [2024-11-27 19:17:23.931855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.339 [2024-11-27 19:17:23.931873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.339 19:17:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.339 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.339 19:17:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.599 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.599 "name": "Existed_Raid", 00:19:14.599 "uuid": "077bc151-67e1-45e7-a4aa-954c54a7e57f", 00:19:14.599 "strip_size_kb": 0, 00:19:14.599 "state": "configuring", 00:19:14.599 "raid_level": "raid1", 00:19:14.599 "superblock": true, 00:19:14.599 "num_base_bdevs": 2, 00:19:14.599 "num_base_bdevs_discovered": 0, 00:19:14.599 "num_base_bdevs_operational": 2, 00:19:14.599 "base_bdevs_list": [ 00:19:14.599 { 00:19:14.599 "name": "BaseBdev1", 00:19:14.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.599 "is_configured": false, 00:19:14.599 "data_offset": 0, 00:19:14.599 "data_size": 0 00:19:14.599 }, 00:19:14.599 { 00:19:14.599 "name": "BaseBdev2", 00:19:14.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.599 "is_configured": false, 00:19:14.599 "data_offset": 0, 00:19:14.599 "data_size": 0 00:19:14.599 } 00:19:14.599 ] 00:19:14.599 }' 00:19:14.599 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.599 19:17:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 [2024-11-27 19:17:24.366927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.860 [2024-11-27 19:17:24.367020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 [2024-11-27 19:17:24.378905] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.860 [2024-11-27 19:17:24.378988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.860 [2024-11-27 19:17:24.379013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.860 [2024-11-27 19:17:24.379036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 [2024-11-27 19:17:24.427168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.860 BaseBdev1 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 [ 00:19:14.860 { 00:19:14.860 "name": "BaseBdev1", 00:19:14.860 "aliases": [ 00:19:14.860 "0f78e273-daa8-480d-b6bb-4580177f70e5" 00:19:14.860 ], 00:19:14.860 "product_name": "Malloc disk", 00:19:14.860 "block_size": 4128, 00:19:14.860 "num_blocks": 8192, 00:19:14.860 "uuid": "0f78e273-daa8-480d-b6bb-4580177f70e5", 00:19:14.860 "md_size": 32, 00:19:14.860 
"md_interleave": true, 00:19:14.860 "dif_type": 0, 00:19:14.860 "assigned_rate_limits": { 00:19:14.860 "rw_ios_per_sec": 0, 00:19:14.860 "rw_mbytes_per_sec": 0, 00:19:14.860 "r_mbytes_per_sec": 0, 00:19:14.860 "w_mbytes_per_sec": 0 00:19:14.860 }, 00:19:14.860 "claimed": true, 00:19:14.860 "claim_type": "exclusive_write", 00:19:14.860 "zoned": false, 00:19:14.860 "supported_io_types": { 00:19:14.860 "read": true, 00:19:14.860 "write": true, 00:19:14.860 "unmap": true, 00:19:14.860 "flush": true, 00:19:14.860 "reset": true, 00:19:14.860 "nvme_admin": false, 00:19:14.860 "nvme_io": false, 00:19:14.860 "nvme_io_md": false, 00:19:14.860 "write_zeroes": true, 00:19:14.860 "zcopy": true, 00:19:14.860 "get_zone_info": false, 00:19:14.860 "zone_management": false, 00:19:14.860 "zone_append": false, 00:19:14.860 "compare": false, 00:19:14.860 "compare_and_write": false, 00:19:14.860 "abort": true, 00:19:14.860 "seek_hole": false, 00:19:14.860 "seek_data": false, 00:19:14.860 "copy": true, 00:19:14.860 "nvme_iov_md": false 00:19:14.860 }, 00:19:14.860 "memory_domains": [ 00:19:14.860 { 00:19:14.860 "dma_device_id": "system", 00:19:14.860 "dma_device_type": 1 00:19:14.860 }, 00:19:14.860 { 00:19:14.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.860 "dma_device_type": 2 00:19:14.860 } 00:19:14.860 ], 00:19:14.860 "driver_specific": {} 00:19:14.860 } 00:19:14.860 ] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.860 19:17:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.120 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.120 "name": "Existed_Raid", 00:19:15.120 "uuid": "4a8ac178-dead-4a86-8f5b-0547f26165c5", 00:19:15.120 "strip_size_kb": 0, 00:19:15.120 "state": "configuring", 00:19:15.120 "raid_level": "raid1", 
00:19:15.120 "superblock": true, 00:19:15.120 "num_base_bdevs": 2, 00:19:15.120 "num_base_bdevs_discovered": 1, 00:19:15.120 "num_base_bdevs_operational": 2, 00:19:15.120 "base_bdevs_list": [ 00:19:15.120 { 00:19:15.120 "name": "BaseBdev1", 00:19:15.120 "uuid": "0f78e273-daa8-480d-b6bb-4580177f70e5", 00:19:15.120 "is_configured": true, 00:19:15.120 "data_offset": 256, 00:19:15.120 "data_size": 7936 00:19:15.120 }, 00:19:15.120 { 00:19:15.120 "name": "BaseBdev2", 00:19:15.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.120 "is_configured": false, 00:19:15.120 "data_offset": 0, 00:19:15.120 "data_size": 0 00:19:15.120 } 00:19:15.120 ] 00:19:15.120 }' 00:19:15.120 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.120 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.380 [2024-11-27 19:17:24.926350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.380 [2024-11-27 19:17:24.926388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.380 [2024-11-27 19:17:24.938381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.380 [2024-11-27 19:17:24.940043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.380 [2024-11-27 19:17:24.940079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.380 
19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.380 "name": "Existed_Raid", 00:19:15.380 "uuid": "4d27810c-2c9f-4467-8839-c2da81747397", 00:19:15.380 "strip_size_kb": 0, 00:19:15.380 "state": "configuring", 00:19:15.380 "raid_level": "raid1", 00:19:15.380 "superblock": true, 00:19:15.380 "num_base_bdevs": 2, 00:19:15.380 "num_base_bdevs_discovered": 1, 00:19:15.380 "num_base_bdevs_operational": 2, 00:19:15.380 "base_bdevs_list": [ 00:19:15.380 { 00:19:15.380 "name": "BaseBdev1", 00:19:15.380 "uuid": "0f78e273-daa8-480d-b6bb-4580177f70e5", 00:19:15.380 "is_configured": true, 00:19:15.380 "data_offset": 256, 00:19:15.380 "data_size": 7936 00:19:15.380 }, 00:19:15.380 { 00:19:15.380 "name": "BaseBdev2", 00:19:15.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.380 "is_configured": false, 00:19:15.380 "data_offset": 0, 00:19:15.380 "data_size": 0 00:19:15.380 } 00:19:15.380 ] 00:19:15.380 }' 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:15.380 19:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 [2024-11-27 19:17:25.431954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.952 [2024-11-27 19:17:25.432241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:15.952 [2024-11-27 19:17:25.432278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:15.952 [2024-11-27 19:17:25.432387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:15.952 [2024-11-27 19:17:25.432487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:15.952 [2024-11-27 19:17:25.432523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:15.952 [2024-11-27 19:17:25.432616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.952 BaseBdev2 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 [ 00:19:15.952 { 00:19:15.952 "name": "BaseBdev2", 00:19:15.952 "aliases": [ 00:19:15.952 "bb7bfc49-068f-4db2-b303-5ca1f92489c8" 00:19:15.952 ], 00:19:15.952 "product_name": "Malloc disk", 00:19:15.952 "block_size": 4128, 00:19:15.952 "num_blocks": 8192, 00:19:15.952 "uuid": "bb7bfc49-068f-4db2-b303-5ca1f92489c8", 00:19:15.952 "md_size": 32, 00:19:15.952 "md_interleave": true, 00:19:15.952 "dif_type": 0, 00:19:15.952 "assigned_rate_limits": { 00:19:15.952 "rw_ios_per_sec": 0, 00:19:15.952 "rw_mbytes_per_sec": 0, 00:19:15.952 "r_mbytes_per_sec": 0, 00:19:15.952 "w_mbytes_per_sec": 0 00:19:15.952 }, 00:19:15.952 "claimed": true, 00:19:15.952 "claim_type": "exclusive_write", 
00:19:15.952 "zoned": false, 00:19:15.952 "supported_io_types": { 00:19:15.952 "read": true, 00:19:15.952 "write": true, 00:19:15.952 "unmap": true, 00:19:15.952 "flush": true, 00:19:15.952 "reset": true, 00:19:15.952 "nvme_admin": false, 00:19:15.952 "nvme_io": false, 00:19:15.952 "nvme_io_md": false, 00:19:15.952 "write_zeroes": true, 00:19:15.952 "zcopy": true, 00:19:15.952 "get_zone_info": false, 00:19:15.952 "zone_management": false, 00:19:15.952 "zone_append": false, 00:19:15.952 "compare": false, 00:19:15.952 "compare_and_write": false, 00:19:15.952 "abort": true, 00:19:15.952 "seek_hole": false, 00:19:15.952 "seek_data": false, 00:19:15.952 "copy": true, 00:19:15.952 "nvme_iov_md": false 00:19:15.952 }, 00:19:15.952 "memory_domains": [ 00:19:15.952 { 00:19:15.952 "dma_device_id": "system", 00:19:15.952 "dma_device_type": 1 00:19:15.952 }, 00:19:15.952 { 00:19:15.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.952 "dma_device_type": 2 00:19:15.952 } 00:19:15.952 ], 00:19:15.952 "driver_specific": {} 00:19:15.952 } 00:19:15.952 ] 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.952 
19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.952 "name": "Existed_Raid", 00:19:15.952 "uuid": "4d27810c-2c9f-4467-8839-c2da81747397", 00:19:15.952 "strip_size_kb": 0, 00:19:15.952 "state": "online", 00:19:15.952 "raid_level": "raid1", 00:19:15.952 "superblock": true, 00:19:15.952 "num_base_bdevs": 2, 00:19:15.952 "num_base_bdevs_discovered": 2, 00:19:15.952 
"num_base_bdevs_operational": 2, 00:19:15.952 "base_bdevs_list": [ 00:19:15.952 { 00:19:15.952 "name": "BaseBdev1", 00:19:15.952 "uuid": "0f78e273-daa8-480d-b6bb-4580177f70e5", 00:19:15.952 "is_configured": true, 00:19:15.952 "data_offset": 256, 00:19:15.952 "data_size": 7936 00:19:15.952 }, 00:19:15.952 { 00:19:15.952 "name": "BaseBdev2", 00:19:15.952 "uuid": "bb7bfc49-068f-4db2-b303-5ca1f92489c8", 00:19:15.952 "is_configured": true, 00:19:15.952 "data_offset": 256, 00:19:15.952 "data_size": 7936 00:19:15.952 } 00:19:15.952 ] 00:19:15.952 }' 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.952 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.521 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:16.521 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.522 19:17:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.522 [2024-11-27 19:17:25.907425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.522 "name": "Existed_Raid", 00:19:16.522 "aliases": [ 00:19:16.522 "4d27810c-2c9f-4467-8839-c2da81747397" 00:19:16.522 ], 00:19:16.522 "product_name": "Raid Volume", 00:19:16.522 "block_size": 4128, 00:19:16.522 "num_blocks": 7936, 00:19:16.522 "uuid": "4d27810c-2c9f-4467-8839-c2da81747397", 00:19:16.522 "md_size": 32, 00:19:16.522 "md_interleave": true, 00:19:16.522 "dif_type": 0, 00:19:16.522 "assigned_rate_limits": { 00:19:16.522 "rw_ios_per_sec": 0, 00:19:16.522 "rw_mbytes_per_sec": 0, 00:19:16.522 "r_mbytes_per_sec": 0, 00:19:16.522 "w_mbytes_per_sec": 0 00:19:16.522 }, 00:19:16.522 "claimed": false, 00:19:16.522 "zoned": false, 00:19:16.522 "supported_io_types": { 00:19:16.522 "read": true, 00:19:16.522 "write": true, 00:19:16.522 "unmap": false, 00:19:16.522 "flush": false, 00:19:16.522 "reset": true, 00:19:16.522 "nvme_admin": false, 00:19:16.522 "nvme_io": false, 00:19:16.522 "nvme_io_md": false, 00:19:16.522 "write_zeroes": true, 00:19:16.522 "zcopy": false, 00:19:16.522 "get_zone_info": false, 00:19:16.522 "zone_management": false, 00:19:16.522 "zone_append": false, 00:19:16.522 "compare": false, 00:19:16.522 "compare_and_write": false, 00:19:16.522 "abort": false, 00:19:16.522 "seek_hole": false, 00:19:16.522 "seek_data": false, 00:19:16.522 "copy": false, 00:19:16.522 "nvme_iov_md": false 00:19:16.522 }, 00:19:16.522 "memory_domains": [ 00:19:16.522 { 00:19:16.522 "dma_device_id": "system", 00:19:16.522 "dma_device_type": 1 00:19:16.522 }, 00:19:16.522 { 00:19:16.522 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:16.522 "dma_device_type": 2 00:19:16.522 }, 00:19:16.522 { 00:19:16.522 "dma_device_id": "system", 00:19:16.522 "dma_device_type": 1 00:19:16.522 }, 00:19:16.522 { 00:19:16.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.522 "dma_device_type": 2 00:19:16.522 } 00:19:16.522 ], 00:19:16.522 "driver_specific": { 00:19:16.522 "raid": { 00:19:16.522 "uuid": "4d27810c-2c9f-4467-8839-c2da81747397", 00:19:16.522 "strip_size_kb": 0, 00:19:16.522 "state": "online", 00:19:16.522 "raid_level": "raid1", 00:19:16.522 "superblock": true, 00:19:16.522 "num_base_bdevs": 2, 00:19:16.522 "num_base_bdevs_discovered": 2, 00:19:16.522 "num_base_bdevs_operational": 2, 00:19:16.522 "base_bdevs_list": [ 00:19:16.522 { 00:19:16.522 "name": "BaseBdev1", 00:19:16.522 "uuid": "0f78e273-daa8-480d-b6bb-4580177f70e5", 00:19:16.522 "is_configured": true, 00:19:16.522 "data_offset": 256, 00:19:16.522 "data_size": 7936 00:19:16.522 }, 00:19:16.522 { 00:19:16.522 "name": "BaseBdev2", 00:19:16.522 "uuid": "bb7bfc49-068f-4db2-b303-5ca1f92489c8", 00:19:16.522 "is_configured": true, 00:19:16.522 "data_offset": 256, 00:19:16.522 "data_size": 7936 00:19:16.522 } 00:19:16.522 ] 00:19:16.522 } 00:19:16.522 } 00:19:16.522 }' 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:16.522 BaseBdev2' 00:19:16.522 19:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:16.522 
19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.522 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.522 [2024-11-27 19:17:26.150810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.781 19:17:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.781 "name": "Existed_Raid", 00:19:16.781 "uuid": "4d27810c-2c9f-4467-8839-c2da81747397", 00:19:16.781 "strip_size_kb": 0, 00:19:16.781 "state": "online", 00:19:16.781 "raid_level": "raid1", 00:19:16.781 "superblock": true, 00:19:16.781 "num_base_bdevs": 2, 00:19:16.781 "num_base_bdevs_discovered": 1, 00:19:16.781 "num_base_bdevs_operational": 1, 00:19:16.781 "base_bdevs_list": [ 00:19:16.781 { 00:19:16.781 "name": null, 00:19:16.781 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:16.781 "is_configured": false, 00:19:16.781 "data_offset": 0, 00:19:16.781 "data_size": 7936 00:19:16.781 }, 00:19:16.781 { 00:19:16.781 "name": "BaseBdev2", 00:19:16.781 "uuid": "bb7bfc49-068f-4db2-b303-5ca1f92489c8", 00:19:16.781 "is_configured": true, 00:19:16.781 "data_offset": 256, 00:19:16.781 "data_size": 7936 00:19:16.781 } 00:19:16.781 ] 00:19:16.781 }' 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.781 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:17.042 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:17.042 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:17.042 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.042 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.042 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:17.302 19:17:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.302 [2024-11-27 19:17:26.692459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:17.302 [2024-11-27 19:17:26.692557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.302 [2024-11-27 19:17:26.781384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.302 [2024-11-27 19:17:26.781435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.302 [2024-11-27 19:17:26.781447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88524 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88524 ']' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88524 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88524 00:19:17.302 killing process with pid 88524 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88524' 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88524 00:19:17.302 [2024-11-27 19:17:26.865030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.302 19:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88524 00:19:17.302 [2024-11-27 19:17:26.880984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.682 
19:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:18.682 00:19:18.682 real 0m4.927s 00:19:18.682 user 0m7.081s 00:19:18.682 sys 0m0.918s 00:19:18.682 19:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.682 19:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.682 ************************************ 00:19:18.682 END TEST raid_state_function_test_sb_md_interleaved 00:19:18.682 ************************************ 00:19:18.682 19:17:27 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:18.682 19:17:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:18.682 19:17:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.682 19:17:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.682 ************************************ 00:19:18.682 START TEST raid_superblock_test_md_interleaved 00:19:18.682 ************************************ 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88769 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88769 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88769 ']' 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.682 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.682 [2024-11-27 19:17:28.111091] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:18.682 [2024-11-27 19:17:28.111204] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88769 ] 00:19:18.682 [2024-11-27 19:17:28.283933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.942 [2024-11-27 19:17:28.389990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.202 [2024-11-27 19:17:28.590015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.202 [2024-11-27 19:17:28.590072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.462 malloc1 00:19:19.462 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.463 [2024-11-27 19:17:28.962201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.463 [2024-11-27 19:17:28.962267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.463 [2024-11-27 19:17:28.962288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:19.463 [2024-11-27 19:17:28.962297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.463 
[2024-11-27 19:17:28.964092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.463 [2024-11-27 19:17:28.964126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.463 pt1 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.463 19:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.463 malloc2 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.463 [2024-11-27 19:17:29.018167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:19.463 [2024-11-27 19:17:29.018223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.463 [2024-11-27 19:17:29.018241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:19.463 [2024-11-27 19:17:29.018249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.463 [2024-11-27 19:17:29.019971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.463 [2024-11-27 19:17:29.020005] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:19.463 pt2 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.463 [2024-11-27 19:17:29.030182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.463 [2024-11-27 19:17:29.031892] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.463 [2024-11-27 19:17:29.032084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:19.463 [2024-11-27 19:17:29.032096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:19.463 [2024-11-27 19:17:29.032165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:19.463 [2024-11-27 19:17:29.032250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:19.463 [2024-11-27 19:17:29.032280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:19.463 [2024-11-27 19:17:29.032344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.463 
19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.463 "name": "raid_bdev1", 00:19:19.463 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:19.463 "strip_size_kb": 0, 00:19:19.463 "state": "online", 00:19:19.463 "raid_level": "raid1", 00:19:19.463 "superblock": true, 00:19:19.463 "num_base_bdevs": 2, 00:19:19.463 "num_base_bdevs_discovered": 2, 00:19:19.463 "num_base_bdevs_operational": 2, 00:19:19.463 "base_bdevs_list": [ 00:19:19.463 { 00:19:19.463 "name": "pt1", 00:19:19.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.463 "is_configured": true, 00:19:19.463 "data_offset": 256, 00:19:19.463 "data_size": 7936 00:19:19.463 }, 00:19:19.463 { 00:19:19.463 "name": "pt2", 00:19:19.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.463 "is_configured": true, 00:19:19.463 "data_offset": 256, 00:19:19.463 "data_size": 7936 00:19:19.463 } 00:19:19.463 ] 00:19:19.463 }' 00:19:19.463 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.463 19:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.034 [2024-11-27 19:17:29.445721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.034 "name": "raid_bdev1", 00:19:20.034 "aliases": [ 00:19:20.034 "7d5c588d-81a3-4736-b238-1a96dc9db6c6" 00:19:20.034 ], 00:19:20.034 "product_name": "Raid Volume", 00:19:20.034 "block_size": 4128, 00:19:20.034 "num_blocks": 7936, 00:19:20.034 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:20.034 "md_size": 32, 
00:19:20.034 "md_interleave": true, 00:19:20.034 "dif_type": 0, 00:19:20.034 "assigned_rate_limits": { 00:19:20.034 "rw_ios_per_sec": 0, 00:19:20.034 "rw_mbytes_per_sec": 0, 00:19:20.034 "r_mbytes_per_sec": 0, 00:19:20.034 "w_mbytes_per_sec": 0 00:19:20.034 }, 00:19:20.034 "claimed": false, 00:19:20.034 "zoned": false, 00:19:20.034 "supported_io_types": { 00:19:20.034 "read": true, 00:19:20.034 "write": true, 00:19:20.034 "unmap": false, 00:19:20.034 "flush": false, 00:19:20.034 "reset": true, 00:19:20.034 "nvme_admin": false, 00:19:20.034 "nvme_io": false, 00:19:20.034 "nvme_io_md": false, 00:19:20.034 "write_zeroes": true, 00:19:20.034 "zcopy": false, 00:19:20.034 "get_zone_info": false, 00:19:20.034 "zone_management": false, 00:19:20.034 "zone_append": false, 00:19:20.034 "compare": false, 00:19:20.034 "compare_and_write": false, 00:19:20.034 "abort": false, 00:19:20.034 "seek_hole": false, 00:19:20.034 "seek_data": false, 00:19:20.034 "copy": false, 00:19:20.034 "nvme_iov_md": false 00:19:20.034 }, 00:19:20.034 "memory_domains": [ 00:19:20.034 { 00:19:20.034 "dma_device_id": "system", 00:19:20.034 "dma_device_type": 1 00:19:20.034 }, 00:19:20.034 { 00:19:20.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.034 "dma_device_type": 2 00:19:20.034 }, 00:19:20.034 { 00:19:20.034 "dma_device_id": "system", 00:19:20.034 "dma_device_type": 1 00:19:20.034 }, 00:19:20.034 { 00:19:20.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.034 "dma_device_type": 2 00:19:20.034 } 00:19:20.034 ], 00:19:20.034 "driver_specific": { 00:19:20.034 "raid": { 00:19:20.034 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:20.034 "strip_size_kb": 0, 00:19:20.034 "state": "online", 00:19:20.034 "raid_level": "raid1", 00:19:20.034 "superblock": true, 00:19:20.034 "num_base_bdevs": 2, 00:19:20.034 "num_base_bdevs_discovered": 2, 00:19:20.034 "num_base_bdevs_operational": 2, 00:19:20.034 "base_bdevs_list": [ 00:19:20.034 { 00:19:20.034 "name": "pt1", 00:19:20.034 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:20.034 "is_configured": true, 00:19:20.034 "data_offset": 256, 00:19:20.034 "data_size": 7936 00:19:20.034 }, 00:19:20.034 { 00:19:20.034 "name": "pt2", 00:19:20.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.034 "is_configured": true, 00:19:20.034 "data_offset": 256, 00:19:20.034 "data_size": 7936 00:19:20.034 } 00:19:20.034 ] 00:19:20.034 } 00:19:20.034 } 00:19:20.034 }' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:20.034 pt2' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:20.034 19:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:20.034 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.035 [2024-11-27 19:17:29.625367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7d5c588d-81a3-4736-b238-1a96dc9db6c6 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7d5c588d-81a3-4736-b238-1a96dc9db6c6 ']' 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.035 [2024-11-27 19:17:29.653076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.035 [2024-11-27 19:17:29.653111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.035 [2024-11-27 19:17:29.653180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.035 [2024-11-27 19:17:29.653228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.035 [2024-11-27 19:17:29.653239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:20.035 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.295 19:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 19:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.295 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 [2024-11-27 19:17:29.788851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:20.295 [2024-11-27 19:17:29.790646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:20.295 [2024-11-27 19:17:29.790726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:20.295 [2024-11-27 19:17:29.790772] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:20.295 [2024-11-27 19:17:29.790786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.295 [2024-11-27 19:17:29.790796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:20.295 request: 00:19:20.295 { 00:19:20.295 "name": "raid_bdev1", 00:19:20.295 "raid_level": "raid1", 00:19:20.296 "base_bdevs": [ 00:19:20.296 "malloc1", 00:19:20.296 "malloc2" 00:19:20.296 ], 00:19:20.296 "superblock": false, 00:19:20.296 "method": "bdev_raid_create", 00:19:20.296 "req_id": 1 00:19:20.296 } 00:19:20.296 Got JSON-RPC error response 00:19:20.296 response: 00:19:20.296 { 00:19:20.296 "code": -17, 00:19:20.296 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:20.296 } 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.296 19:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.296 [2024-11-27 19:17:29.852788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:20.296 [2024-11-27 19:17:29.852835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.296 [2024-11-27 19:17:29.852849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:20.296 [2024-11-27 19:17:29.852859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.296 [2024-11-27 19:17:29.854735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.296 [2024-11-27 19:17:29.854763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:20.296 [2024-11-27 19:17:29.854802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:20.296 [2024-11-27 19:17:29.854856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:20.296 pt1 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.296 19:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.296 
"name": "raid_bdev1", 00:19:20.296 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:20.296 "strip_size_kb": 0, 00:19:20.296 "state": "configuring", 00:19:20.296 "raid_level": "raid1", 00:19:20.296 "superblock": true, 00:19:20.296 "num_base_bdevs": 2, 00:19:20.296 "num_base_bdevs_discovered": 1, 00:19:20.296 "num_base_bdevs_operational": 2, 00:19:20.296 "base_bdevs_list": [ 00:19:20.296 { 00:19:20.296 "name": "pt1", 00:19:20.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.296 "is_configured": true, 00:19:20.296 "data_offset": 256, 00:19:20.296 "data_size": 7936 00:19:20.296 }, 00:19:20.296 { 00:19:20.296 "name": null, 00:19:20.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.296 "is_configured": false, 00:19:20.296 "data_offset": 256, 00:19:20.296 "data_size": 7936 00:19:20.296 } 00:19:20.296 ] 00:19:20.296 }' 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.296 19:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.867 [2024-11-27 19:17:30.276018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.867 [2024-11-27 19:17:30.276070] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.867 [2024-11-27 19:17:30.276086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:20.867 [2024-11-27 19:17:30.276097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.867 [2024-11-27 19:17:30.276197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.867 [2024-11-27 19:17:30.276236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.867 [2024-11-27 19:17:30.276271] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.867 [2024-11-27 19:17:30.276289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.867 [2024-11-27 19:17:30.276359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.867 [2024-11-27 19:17:30.276372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:20.867 [2024-11-27 19:17:30.276436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:20.867 [2024-11-27 19:17:30.276497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.867 [2024-11-27 19:17:30.276506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:20.867 [2024-11-27 19:17:30.276560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.867 pt2 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.867 19:17:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.867 "name": 
"raid_bdev1", 00:19:20.867 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:20.867 "strip_size_kb": 0, 00:19:20.867 "state": "online", 00:19:20.867 "raid_level": "raid1", 00:19:20.867 "superblock": true, 00:19:20.867 "num_base_bdevs": 2, 00:19:20.867 "num_base_bdevs_discovered": 2, 00:19:20.867 "num_base_bdevs_operational": 2, 00:19:20.867 "base_bdevs_list": [ 00:19:20.867 { 00:19:20.867 "name": "pt1", 00:19:20.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.867 "is_configured": true, 00:19:20.867 "data_offset": 256, 00:19:20.867 "data_size": 7936 00:19:20.867 }, 00:19:20.867 { 00:19:20.867 "name": "pt2", 00:19:20.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.867 "is_configured": true, 00:19:20.867 "data_offset": 256, 00:19:20.867 "data_size": 7936 00:19:20.867 } 00:19:20.867 ] 00:19:20.867 }' 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.867 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.128 19:17:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.128 [2024-11-27 19:17:30.719576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.128 "name": "raid_bdev1", 00:19:21.128 "aliases": [ 00:19:21.128 "7d5c588d-81a3-4736-b238-1a96dc9db6c6" 00:19:21.128 ], 00:19:21.128 "product_name": "Raid Volume", 00:19:21.128 "block_size": 4128, 00:19:21.128 "num_blocks": 7936, 00:19:21.128 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:21.128 "md_size": 32, 00:19:21.128 "md_interleave": true, 00:19:21.128 "dif_type": 0, 00:19:21.128 "assigned_rate_limits": { 00:19:21.128 "rw_ios_per_sec": 0, 00:19:21.128 "rw_mbytes_per_sec": 0, 00:19:21.128 "r_mbytes_per_sec": 0, 00:19:21.128 "w_mbytes_per_sec": 0 00:19:21.128 }, 00:19:21.128 "claimed": false, 00:19:21.128 "zoned": false, 00:19:21.128 "supported_io_types": { 00:19:21.128 "read": true, 00:19:21.128 "write": true, 00:19:21.128 "unmap": false, 00:19:21.128 "flush": false, 00:19:21.128 "reset": true, 00:19:21.128 "nvme_admin": false, 00:19:21.128 "nvme_io": false, 00:19:21.128 "nvme_io_md": false, 00:19:21.128 "write_zeroes": true, 00:19:21.128 "zcopy": false, 00:19:21.128 "get_zone_info": false, 00:19:21.128 "zone_management": false, 00:19:21.128 "zone_append": false, 00:19:21.128 "compare": false, 00:19:21.128 "compare_and_write": false, 00:19:21.128 "abort": false, 00:19:21.128 "seek_hole": false, 00:19:21.128 "seek_data": false, 00:19:21.128 "copy": false, 00:19:21.128 "nvme_iov_md": 
false 00:19:21.128 }, 00:19:21.128 "memory_domains": [ 00:19:21.128 { 00:19:21.128 "dma_device_id": "system", 00:19:21.128 "dma_device_type": 1 00:19:21.128 }, 00:19:21.128 { 00:19:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.128 "dma_device_type": 2 00:19:21.128 }, 00:19:21.128 { 00:19:21.128 "dma_device_id": "system", 00:19:21.128 "dma_device_type": 1 00:19:21.128 }, 00:19:21.128 { 00:19:21.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.128 "dma_device_type": 2 00:19:21.128 } 00:19:21.128 ], 00:19:21.128 "driver_specific": { 00:19:21.128 "raid": { 00:19:21.128 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:21.128 "strip_size_kb": 0, 00:19:21.128 "state": "online", 00:19:21.128 "raid_level": "raid1", 00:19:21.128 "superblock": true, 00:19:21.128 "num_base_bdevs": 2, 00:19:21.128 "num_base_bdevs_discovered": 2, 00:19:21.128 "num_base_bdevs_operational": 2, 00:19:21.128 "base_bdevs_list": [ 00:19:21.128 { 00:19:21.128 "name": "pt1", 00:19:21.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.128 "is_configured": true, 00:19:21.128 "data_offset": 256, 00:19:21.128 "data_size": 7936 00:19:21.128 }, 00:19:21.128 { 00:19:21.128 "name": "pt2", 00:19:21.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.128 "is_configured": true, 00:19:21.128 "data_offset": 256, 00:19:21.128 "data_size": 7936 00:19:21.128 } 00:19:21.128 ] 00:19:21.128 } 00:19:21.128 } 00:19:21.128 }' 00:19:21.128 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.389 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:21.389 pt2' 00:19:21.389 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.389 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:21.389 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:21.390 [2024-11-27 19:17:30.927214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7d5c588d-81a3-4736-b238-1a96dc9db6c6 '!=' 7d5c588d-81a3-4736-b238-1a96dc9db6c6 ']' 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.390 [2024-11-27 19:17:30.970950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.390 19:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.651 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:21.651 "name": "raid_bdev1", 00:19:21.651 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:21.651 "strip_size_kb": 0, 00:19:21.651 "state": "online", 00:19:21.651 "raid_level": "raid1", 00:19:21.651 "superblock": true, 00:19:21.651 "num_base_bdevs": 2, 00:19:21.651 "num_base_bdevs_discovered": 1, 00:19:21.651 "num_base_bdevs_operational": 1, 00:19:21.651 "base_bdevs_list": [ 00:19:21.651 { 00:19:21.651 "name": null, 00:19:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.651 "is_configured": false, 00:19:21.651 "data_offset": 0, 00:19:21.651 "data_size": 7936 00:19:21.651 }, 00:19:21.651 { 00:19:21.651 "name": "pt2", 00:19:21.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.651 "is_configured": true, 00:19:21.651 "data_offset": 256, 00:19:21.651 "data_size": 7936 00:19:21.651 } 00:19:21.651 ] 00:19:21.651 }' 00:19:21.651 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.651 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 [2024-11-27 19:17:31.438160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.912 [2024-11-27 19:17:31.438184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.912 [2024-11-27 19:17:31.438232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.912 [2024-11-27 19:17:31.438267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:21.912 [2024-11-27 19:17:31.438277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.912 [2024-11-27 19:17:31.510049] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.912 [2024-11-27 19:17:31.510146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.912 [2024-11-27 19:17:31.510176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:21.912 [2024-11-27 19:17:31.510203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.912 [2024-11-27 19:17:31.512102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.912 [2024-11-27 19:17:31.512184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.912 [2024-11-27 19:17:31.512243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:21.912 [2024-11-27 19:17:31.512317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.912 [2024-11-27 19:17:31.512393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:21.912 [2024-11-27 19:17:31.512429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:21.912 [2024-11-27 19:17:31.512528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:21.912 [2024-11-27 19:17:31.512623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:21.912 [2024-11-27 19:17:31.512657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:21.912 [2024-11-27 19:17:31.512767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.912 pt2 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.912 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.912 19:17:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.913 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.913 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.913 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.913 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.173 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.173 "name": "raid_bdev1", 00:19:22.173 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:22.173 "strip_size_kb": 0, 00:19:22.173 "state": "online", 00:19:22.173 "raid_level": "raid1", 00:19:22.173 "superblock": true, 00:19:22.173 "num_base_bdevs": 2, 00:19:22.173 "num_base_bdevs_discovered": 1, 00:19:22.173 "num_base_bdevs_operational": 1, 00:19:22.173 "base_bdevs_list": [ 00:19:22.173 { 00:19:22.173 "name": null, 00:19:22.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.173 "is_configured": false, 00:19:22.173 "data_offset": 256, 00:19:22.173 "data_size": 7936 00:19:22.173 }, 00:19:22.173 { 00:19:22.173 "name": "pt2", 00:19:22.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.173 "is_configured": true, 00:19:22.173 "data_offset": 256, 00:19:22.173 "data_size": 7936 00:19:22.173 } 00:19:22.173 ] 00:19:22.173 }' 00:19:22.173 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.173 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.433 19:17:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.433 [2024-11-27 19:17:31.921308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.433 [2024-11-27 19:17:31.921335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.433 [2024-11-27 19:17:31.921385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.433 [2024-11-27 19:17:31.921425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.433 [2024-11-27 19:17:31.921433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.433 [2024-11-27 19:17:31.981229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:22.433 [2024-11-27 19:17:31.981326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.433 [2024-11-27 19:17:31.981346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:22.433 [2024-11-27 19:17:31.981354] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.433 [2024-11-27 19:17:31.983180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.433 [2024-11-27 19:17:31.983217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:22.433 [2024-11-27 19:17:31.983262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:22.433 [2024-11-27 19:17:31.983306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:22.433 [2024-11-27 19:17:31.983395] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:22.433 [2024-11-27 19:17:31.983404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.433 [2024-11-27 19:17:31.983418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:22.433 [2024-11-27 19:17:31.983478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.433 [2024-11-27 19:17:31.983536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:22.433 [2024-11-27 19:17:31.983544] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:22.433 [2024-11-27 19:17:31.983605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:22.433 [2024-11-27 19:17:31.983658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:22.433 [2024-11-27 19:17:31.983677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:22.433 [2024-11-27 19:17:31.983770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.433 pt1 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.433 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.434 19:17:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.434 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.434 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.434 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.434 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.434 19:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.434 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.434 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.434 "name": "raid_bdev1", 00:19:22.434 "uuid": "7d5c588d-81a3-4736-b238-1a96dc9db6c6", 00:19:22.434 "strip_size_kb": 0, 00:19:22.434 "state": "online", 00:19:22.434 "raid_level": "raid1", 00:19:22.434 "superblock": true, 00:19:22.434 "num_base_bdevs": 2, 00:19:22.434 "num_base_bdevs_discovered": 1, 00:19:22.434 "num_base_bdevs_operational": 1, 00:19:22.434 "base_bdevs_list": [ 00:19:22.434 { 00:19:22.434 "name": null, 00:19:22.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.434 "is_configured": false, 00:19:22.434 "data_offset": 256, 00:19:22.434 "data_size": 7936 00:19:22.434 }, 00:19:22.434 { 00:19:22.434 "name": "pt2", 00:19:22.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.434 "is_configured": true, 00:19:22.434 "data_offset": 256, 00:19:22.434 "data_size": 7936 00:19:22.434 } 00:19:22.434 ] 00:19:22.434 }' 00:19:22.434 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.434 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:23.002 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.003 [2024-11-27 19:17:32.488647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7d5c588d-81a3-4736-b238-1a96dc9db6c6 '!=' 7d5c588d-81a3-4736-b238-1a96dc9db6c6 ']' 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88769 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88769 ']' 00:19:23.003 19:17:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88769 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88769 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.003 killing process with pid 88769 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88769' 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88769 00:19:23.003 [2024-11-27 19:17:32.575257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.003 [2024-11-27 19:17:32.575319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.003 [2024-11-27 19:17:32.575353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.003 [2024-11-27 19:17:32.575366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:23.003 19:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88769 00:19:23.261 [2024-11-27 19:17:32.768990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:24.275 19:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:24.275 00:19:24.275 real 0m5.796s 00:19:24.275 user 0m8.727s 00:19:24.275 sys 0m1.109s 00:19:24.275 
19:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.275 19:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.275 ************************************ 00:19:24.275 END TEST raid_superblock_test_md_interleaved 00:19:24.275 ************************************ 00:19:24.275 19:17:33 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:24.275 19:17:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:24.275 19:17:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.275 19:17:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.275 ************************************ 00:19:24.275 START TEST raid_rebuild_test_sb_md_interleaved 00:19:24.275 ************************************ 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:24.275 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89096 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89096 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89096 ']' 00:19:24.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.535 19:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.535 [2024-11-27 19:17:34.009739] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:24.535 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:24.535 Zero copy mechanism will not be used. 
00:19:24.535 [2024-11-27 19:17:34.009935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89096 ] 00:19:24.794 [2024-11-27 19:17:34.191182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.794 [2024-11-27 19:17:34.297624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.054 [2024-11-27 19:17:34.474460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.054 [2024-11-27 19:17:34.474499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.314 BaseBdev1_malloc 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.314 19:17:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.314 [2024-11-27 19:17:34.854241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:25.314 [2024-11-27 19:17:34.854311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.314 [2024-11-27 19:17:34.854333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:25.314 [2024-11-27 19:17:34.854343] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.314 [2024-11-27 19:17:34.856081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.314 [2024-11-27 19:17:34.856122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:25.314 BaseBdev1 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.314 BaseBdev2_malloc 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.314 [2024-11-27 19:17:34.903541] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:25.314 [2024-11-27 19:17:34.903600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.314 [2024-11-27 19:17:34.903618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:25.314 [2024-11-27 19:17:34.903630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.314 [2024-11-27 19:17:34.905372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.314 [2024-11-27 19:17:34.905493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:25.314 BaseBdev2 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.314 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.574 spare_malloc 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.574 spare_delay 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.574 [2024-11-27 19:17:34.975794] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:25.574 [2024-11-27 19:17:34.975850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.574 [2024-11-27 19:17:34.975868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:25.574 [2024-11-27 19:17:34.975888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.574 [2024-11-27 19:17:34.977616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.574 [2024-11-27 19:17:34.977656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:25.574 spare 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:25.574 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.575 [2024-11-27 19:17:34.987822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.575 [2024-11-27 19:17:34.989504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.575 [2024-11-27 
19:17:34.989700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:25.575 [2024-11-27 19:17:34.989715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:25.575 [2024-11-27 19:17:34.989784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:25.575 [2024-11-27 19:17:34.989850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:25.575 [2024-11-27 19:17:34.989857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:25.575 [2024-11-27 19:17:34.989920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.575 19:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.575 "name": "raid_bdev1", 00:19:25.575 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:25.575 "strip_size_kb": 0, 00:19:25.575 "state": "online", 00:19:25.575 "raid_level": "raid1", 00:19:25.575 "superblock": true, 00:19:25.575 "num_base_bdevs": 2, 00:19:25.575 "num_base_bdevs_discovered": 2, 00:19:25.575 "num_base_bdevs_operational": 2, 00:19:25.575 "base_bdevs_list": [ 00:19:25.575 { 00:19:25.575 "name": "BaseBdev1", 00:19:25.575 "uuid": "42c4a1c0-bd6d-575e-89a3-5008ee2309e4", 00:19:25.575 "is_configured": true, 00:19:25.575 "data_offset": 256, 00:19:25.575 "data_size": 7936 00:19:25.575 }, 00:19:25.575 { 00:19:25.575 "name": "BaseBdev2", 00:19:25.575 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:25.575 "is_configured": true, 00:19:25.575 "data_offset": 256, 00:19:25.575 "data_size": 7936 00:19:25.575 } 00:19:25.575 ] 00:19:25.575 }' 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.575 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.835 19:17:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.835 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.835 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.835 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:25.835 [2024-11-27 19:17:35.455228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.835 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:26.094 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:26.094 19:17:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.095 [2024-11-27 19:17:35.542799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.095 19:17:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.095 "name": "raid_bdev1", 00:19:26.095 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:26.095 "strip_size_kb": 0, 00:19:26.095 "state": "online", 00:19:26.095 "raid_level": "raid1", 00:19:26.095 "superblock": true, 00:19:26.095 "num_base_bdevs": 2, 00:19:26.095 "num_base_bdevs_discovered": 1, 00:19:26.095 "num_base_bdevs_operational": 1, 00:19:26.095 "base_bdevs_list": [ 00:19:26.095 { 00:19:26.095 "name": null, 00:19:26.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.095 "is_configured": false, 00:19:26.095 "data_offset": 0, 00:19:26.095 "data_size": 7936 00:19:26.095 }, 00:19:26.095 { 00:19:26.095 "name": "BaseBdev2", 00:19:26.095 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:26.095 "is_configured": true, 00:19:26.095 "data_offset": 256, 00:19:26.095 "data_size": 7936 00:19:26.095 } 00:19:26.095 ] 00:19:26.095 }' 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.095 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.355 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.355 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.355 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.355 [2024-11-27 19:17:35.978126] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.615 [2024-11-27 19:17:35.994073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:26.615 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.615 19:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:26.615 [2024-11-27 19:17:35.995865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.555 19:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.555 19:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.555 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.555 "name": "raid_bdev1", 00:19:27.555 
"uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:27.555 "strip_size_kb": 0, 00:19:27.555 "state": "online", 00:19:27.555 "raid_level": "raid1", 00:19:27.555 "superblock": true, 00:19:27.555 "num_base_bdevs": 2, 00:19:27.555 "num_base_bdevs_discovered": 2, 00:19:27.555 "num_base_bdevs_operational": 2, 00:19:27.555 "process": { 00:19:27.555 "type": "rebuild", 00:19:27.555 "target": "spare", 00:19:27.556 "progress": { 00:19:27.556 "blocks": 2560, 00:19:27.556 "percent": 32 00:19:27.556 } 00:19:27.556 }, 00:19:27.556 "base_bdevs_list": [ 00:19:27.556 { 00:19:27.556 "name": "spare", 00:19:27.556 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:27.556 "is_configured": true, 00:19:27.556 "data_offset": 256, 00:19:27.556 "data_size": 7936 00:19:27.556 }, 00:19:27.556 { 00:19:27.556 "name": "BaseBdev2", 00:19:27.556 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:27.556 "is_configured": true, 00:19:27.556 "data_offset": 256, 00:19:27.556 "data_size": 7936 00:19:27.556 } 00:19:27.556 ] 00:19:27.556 }' 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.556 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.556 [2024-11-27 19:17:37.160248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:27.816 [2024-11-27 19:17:37.200684] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.816 [2024-11-27 19:17:37.200749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.816 [2024-11-27 19:17:37.200763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.816 [2024-11-27 19:17:37.200775] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.816 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.816 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.816 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.816 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.816 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.817 "name": "raid_bdev1", 00:19:27.817 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:27.817 "strip_size_kb": 0, 00:19:27.817 "state": "online", 00:19:27.817 "raid_level": "raid1", 00:19:27.817 "superblock": true, 00:19:27.817 "num_base_bdevs": 2, 00:19:27.817 "num_base_bdevs_discovered": 1, 00:19:27.817 "num_base_bdevs_operational": 1, 00:19:27.817 "base_bdevs_list": [ 00:19:27.817 { 00:19:27.817 "name": null, 00:19:27.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.817 "is_configured": false, 00:19:27.817 "data_offset": 0, 00:19:27.817 "data_size": 7936 00:19:27.817 }, 00:19:27.817 { 00:19:27.817 "name": "BaseBdev2", 00:19:27.817 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:27.817 "is_configured": true, 00:19:27.817 "data_offset": 256, 00:19:27.817 "data_size": 7936 00:19:27.817 } 00:19:27.817 ] 00:19:27.817 }' 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.817 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.078 "name": "raid_bdev1", 00:19:28.078 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:28.078 "strip_size_kb": 0, 00:19:28.078 "state": "online", 00:19:28.078 "raid_level": "raid1", 00:19:28.078 "superblock": true, 00:19:28.078 "num_base_bdevs": 2, 00:19:28.078 "num_base_bdevs_discovered": 1, 00:19:28.078 "num_base_bdevs_operational": 1, 00:19:28.078 "base_bdevs_list": [ 00:19:28.078 { 00:19:28.078 "name": null, 00:19:28.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.078 "is_configured": false, 00:19:28.078 "data_offset": 0, 00:19:28.078 "data_size": 7936 00:19:28.078 }, 00:19:28.078 { 00:19:28.078 "name": "BaseBdev2", 00:19:28.078 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:28.078 "is_configured": true, 00:19:28.078 "data_offset": 256, 00:19:28.078 "data_size": 7936 00:19:28.078 } 00:19:28.078 ] 00:19:28.078 }' 
00:19:28.078 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.339 [2024-11-27 19:17:37.777733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.339 [2024-11-27 19:17:37.792727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.339 19:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:28.339 [2024-11-27 19:17:37.794462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.281 "name": "raid_bdev1", 00:19:29.281 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:29.281 "strip_size_kb": 0, 00:19:29.281 "state": "online", 00:19:29.281 "raid_level": "raid1", 00:19:29.281 "superblock": true, 00:19:29.281 "num_base_bdevs": 2, 00:19:29.281 "num_base_bdevs_discovered": 2, 00:19:29.281 "num_base_bdevs_operational": 2, 00:19:29.281 "process": { 00:19:29.281 "type": "rebuild", 00:19:29.281 "target": "spare", 00:19:29.281 "progress": { 00:19:29.281 "blocks": 2560, 00:19:29.281 "percent": 32 00:19:29.281 } 00:19:29.281 }, 00:19:29.281 "base_bdevs_list": [ 00:19:29.281 { 00:19:29.281 "name": "spare", 00:19:29.281 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:29.281 "is_configured": true, 00:19:29.281 "data_offset": 256, 00:19:29.281 "data_size": 7936 00:19:29.281 }, 00:19:29.281 { 00:19:29.281 "name": "BaseBdev2", 00:19:29.281 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:29.281 "is_configured": true, 00:19:29.281 "data_offset": 256, 00:19:29.281 "data_size": 7936 00:19:29.281 } 00:19:29.281 ] 00:19:29.281 }' 00:19:29.281 19:17:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.281 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:29.541 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=740 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.541 19:17:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.541 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.541 "name": "raid_bdev1", 00:19:29.541 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:29.541 "strip_size_kb": 0, 00:19:29.541 "state": "online", 00:19:29.541 "raid_level": "raid1", 00:19:29.541 "superblock": true, 00:19:29.541 "num_base_bdevs": 2, 00:19:29.541 "num_base_bdevs_discovered": 2, 00:19:29.541 "num_base_bdevs_operational": 2, 00:19:29.541 "process": { 00:19:29.541 "type": "rebuild", 00:19:29.541 "target": "spare", 00:19:29.541 "progress": { 00:19:29.541 "blocks": 2816, 00:19:29.541 "percent": 35 00:19:29.541 } 00:19:29.541 }, 00:19:29.542 "base_bdevs_list": [ 00:19:29.542 { 00:19:29.542 "name": "spare", 00:19:29.542 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:29.542 "is_configured": true, 00:19:29.542 "data_offset": 256, 00:19:29.542 "data_size": 7936 00:19:29.542 }, 00:19:29.542 { 00:19:29.542 "name": "BaseBdev2", 00:19:29.542 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:29.542 "is_configured": true, 00:19:29.542 "data_offset": 256, 00:19:29.542 "data_size": 7936 00:19:29.542 } 00:19:29.542 ] 00:19:29.542 }' 00:19:29.542 19:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.542 19:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.542 19:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.542 19:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.542 19:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.484 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.743 19:17:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.743 "name": "raid_bdev1", 00:19:30.744 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:30.744 "strip_size_kb": 0, 00:19:30.744 "state": "online", 00:19:30.744 "raid_level": "raid1", 00:19:30.744 "superblock": true, 00:19:30.744 "num_base_bdevs": 2, 00:19:30.744 "num_base_bdevs_discovered": 2, 00:19:30.744 "num_base_bdevs_operational": 2, 00:19:30.744 "process": { 00:19:30.744 "type": "rebuild", 00:19:30.744 "target": "spare", 00:19:30.744 "progress": { 00:19:30.744 "blocks": 5632, 00:19:30.744 "percent": 70 00:19:30.744 } 00:19:30.744 }, 00:19:30.744 "base_bdevs_list": [ 00:19:30.744 { 00:19:30.744 "name": "spare", 00:19:30.744 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:30.744 "is_configured": true, 00:19:30.744 "data_offset": 256, 00:19:30.744 "data_size": 7936 00:19:30.744 }, 00:19:30.744 { 00:19:30.744 "name": "BaseBdev2", 00:19:30.744 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:30.744 "is_configured": true, 00:19:30.744 "data_offset": 256, 00:19:30.744 "data_size": 7936 00:19:30.744 } 00:19:30.744 ] 00:19:30.744 }' 00:19:30.744 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.744 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.744 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.744 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.744 19:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:31.314 [2024-11-27 19:17:40.906227] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:31.314 [2024-11-27 19:17:40.906365] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:31.314 [2024-11-27 19:17:40.906464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.884 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.884 "name": "raid_bdev1", 00:19:31.884 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:31.884 "strip_size_kb": 0, 00:19:31.884 "state": "online", 00:19:31.884 "raid_level": "raid1", 00:19:31.884 "superblock": true, 00:19:31.884 "num_base_bdevs": 2, 00:19:31.884 
"num_base_bdevs_discovered": 2, 00:19:31.884 "num_base_bdevs_operational": 2, 00:19:31.884 "base_bdevs_list": [ 00:19:31.884 { 00:19:31.885 "name": "spare", 00:19:31.885 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:31.885 "is_configured": true, 00:19:31.885 "data_offset": 256, 00:19:31.885 "data_size": 7936 00:19:31.885 }, 00:19:31.885 { 00:19:31.885 "name": "BaseBdev2", 00:19:31.885 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:31.885 "is_configured": true, 00:19:31.885 "data_offset": 256, 00:19:31.885 "data_size": 7936 00:19:31.885 } 00:19:31.885 ] 00:19:31.885 }' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.885 
19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.885 "name": "raid_bdev1", 00:19:31.885 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:31.885 "strip_size_kb": 0, 00:19:31.885 "state": "online", 00:19:31.885 "raid_level": "raid1", 00:19:31.885 "superblock": true, 00:19:31.885 "num_base_bdevs": 2, 00:19:31.885 "num_base_bdevs_discovered": 2, 00:19:31.885 "num_base_bdevs_operational": 2, 00:19:31.885 "base_bdevs_list": [ 00:19:31.885 { 00:19:31.885 "name": "spare", 00:19:31.885 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:31.885 "is_configured": true, 00:19:31.885 "data_offset": 256, 00:19:31.885 "data_size": 7936 00:19:31.885 }, 00:19:31.885 { 00:19:31.885 "name": "BaseBdev2", 00:19:31.885 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:31.885 "is_configured": true, 00:19:31.885 "data_offset": 256, 00:19:31.885 "data_size": 7936 00:19:31.885 } 00:19:31.885 ] 00:19:31.885 }' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.885 19:17:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.885 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.146 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.146 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.146 "name": 
"raid_bdev1", 00:19:32.146 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:32.146 "strip_size_kb": 0, 00:19:32.146 "state": "online", 00:19:32.146 "raid_level": "raid1", 00:19:32.146 "superblock": true, 00:19:32.146 "num_base_bdevs": 2, 00:19:32.146 "num_base_bdevs_discovered": 2, 00:19:32.146 "num_base_bdevs_operational": 2, 00:19:32.146 "base_bdevs_list": [ 00:19:32.146 { 00:19:32.146 "name": "spare", 00:19:32.146 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:32.146 "is_configured": true, 00:19:32.146 "data_offset": 256, 00:19:32.146 "data_size": 7936 00:19:32.146 }, 00:19:32.146 { 00:19:32.146 "name": "BaseBdev2", 00:19:32.146 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:32.146 "is_configured": true, 00:19:32.146 "data_offset": 256, 00:19:32.146 "data_size": 7936 00:19:32.146 } 00:19:32.146 ] 00:19:32.146 }' 00:19:32.146 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.146 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.407 [2024-11-27 19:17:41.914183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.407 [2024-11-27 19:17:41.914217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.407 [2024-11-27 19:17:41.914295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.407 [2024-11-27 19:17:41.914354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.407 [2024-11-27 
19:17:41.914364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.407 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.408 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:32.408 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.408 19:17:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.408 [2024-11-27 19:17:41.990046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.408 [2024-11-27 19:17:41.990098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.408 [2024-11-27 19:17:41.990119] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:32.408 [2024-11-27 19:17:41.990129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.408 [2024-11-27 19:17:41.992070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.408 [2024-11-27 19:17:41.992167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.408 [2024-11-27 19:17:41.992224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:32.408 [2024-11-27 19:17:41.992277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.408 [2024-11-27 19:17:41.992385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.408 spare 00:19:32.408 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.408 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:32.408 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.408 19:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.668 [2024-11-27 19:17:42.092273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:32.668 [2024-11-27 19:17:42.092302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:32.668 [2024-11-27 19:17:42.092386] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:32.668 [2024-11-27 19:17:42.092456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:32.668 [2024-11-27 19:17:42.092465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:32.668 [2024-11-27 19:17:42.092537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.668 19:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.668 "name": "raid_bdev1", 00:19:32.668 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:32.668 "strip_size_kb": 0, 00:19:32.668 "state": "online", 00:19:32.668 "raid_level": "raid1", 00:19:32.668 "superblock": true, 00:19:32.668 "num_base_bdevs": 2, 00:19:32.668 "num_base_bdevs_discovered": 2, 00:19:32.668 "num_base_bdevs_operational": 2, 00:19:32.668 "base_bdevs_list": [ 00:19:32.668 { 00:19:32.668 "name": "spare", 00:19:32.668 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:32.668 "is_configured": true, 00:19:32.668 "data_offset": 256, 00:19:32.668 "data_size": 7936 00:19:32.668 }, 00:19:32.668 { 00:19:32.668 "name": "BaseBdev2", 00:19:32.668 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:32.668 "is_configured": true, 00:19:32.668 "data_offset": 256, 00:19:32.668 "data_size": 7936 00:19:32.668 } 00:19:32.668 ] 00:19:32.668 }' 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.668 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.928 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.929 19:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.929 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.188 "name": "raid_bdev1", 00:19:33.188 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:33.188 "strip_size_kb": 0, 00:19:33.188 "state": "online", 00:19:33.188 "raid_level": "raid1", 00:19:33.188 "superblock": true, 00:19:33.188 "num_base_bdevs": 2, 00:19:33.188 "num_base_bdevs_discovered": 2, 00:19:33.188 "num_base_bdevs_operational": 2, 00:19:33.188 "base_bdevs_list": [ 00:19:33.188 { 00:19:33.188 "name": "spare", 00:19:33.188 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:33.188 "is_configured": true, 00:19:33.188 "data_offset": 256, 00:19:33.188 "data_size": 7936 00:19:33.188 }, 00:19:33.188 { 00:19:33.188 "name": "BaseBdev2", 00:19:33.188 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:33.188 "is_configured": true, 00:19:33.188 "data_offset": 256, 00:19:33.188 "data_size": 7936 00:19:33.188 } 00:19:33.188 ] 00:19:33.188 }' 00:19:33.188 19:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:33.188 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.189 [2024-11-27 19:17:42.720862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.189 19:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.189 "name": "raid_bdev1", 00:19:33.189 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:33.189 "strip_size_kb": 0, 00:19:33.189 "state": "online", 00:19:33.189 
"raid_level": "raid1", 00:19:33.189 "superblock": true, 00:19:33.189 "num_base_bdevs": 2, 00:19:33.189 "num_base_bdevs_discovered": 1, 00:19:33.189 "num_base_bdevs_operational": 1, 00:19:33.189 "base_bdevs_list": [ 00:19:33.189 { 00:19:33.189 "name": null, 00:19:33.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.189 "is_configured": false, 00:19:33.189 "data_offset": 0, 00:19:33.189 "data_size": 7936 00:19:33.189 }, 00:19:33.189 { 00:19:33.189 "name": "BaseBdev2", 00:19:33.189 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:33.189 "is_configured": true, 00:19:33.189 "data_offset": 256, 00:19:33.189 "data_size": 7936 00:19:33.189 } 00:19:33.189 ] 00:19:33.189 }' 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.189 19:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.759 19:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:33.759 19:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.759 19:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.759 [2024-11-27 19:17:43.168118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.759 [2024-11-27 19:17:43.168307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:33.759 [2024-11-27 19:17:43.168369] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:33.759 [2024-11-27 19:17:43.168447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.759 [2024-11-27 19:17:43.183079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:33.759 19:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.759 19:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:33.759 [2024-11-27 19:17:43.184882] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.699 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:34.699 "name": "raid_bdev1", 00:19:34.699 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:34.699 "strip_size_kb": 0, 00:19:34.699 "state": "online", 00:19:34.699 "raid_level": "raid1", 00:19:34.699 "superblock": true, 00:19:34.699 "num_base_bdevs": 2, 00:19:34.700 "num_base_bdevs_discovered": 2, 00:19:34.700 "num_base_bdevs_operational": 2, 00:19:34.700 "process": { 00:19:34.700 "type": "rebuild", 00:19:34.700 "target": "spare", 00:19:34.700 "progress": { 00:19:34.700 "blocks": 2560, 00:19:34.700 "percent": 32 00:19:34.700 } 00:19:34.700 }, 00:19:34.700 "base_bdevs_list": [ 00:19:34.700 { 00:19:34.700 "name": "spare", 00:19:34.700 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:34.700 "is_configured": true, 00:19:34.700 "data_offset": 256, 00:19:34.700 "data_size": 7936 00:19:34.700 }, 00:19:34.700 { 00:19:34.700 "name": "BaseBdev2", 00:19:34.700 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:34.700 "is_configured": true, 00:19:34.700 "data_offset": 256, 00:19:34.700 "data_size": 7936 00:19:34.700 } 00:19:34.700 ] 00:19:34.700 }' 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.700 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.700 [2024-11-27 19:17:44.300817] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.959 [2024-11-27 19:17:44.389535] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:34.959 [2024-11-27 19:17:44.389595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.959 [2024-11-27 19:17:44.389608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.959 [2024-11-27 19:17:44.389617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:34.959 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.959 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.960 19:17:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.960 "name": "raid_bdev1", 00:19:34.960 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:34.960 "strip_size_kb": 0, 00:19:34.960 "state": "online", 00:19:34.960 "raid_level": "raid1", 00:19:34.960 "superblock": true, 00:19:34.960 "num_base_bdevs": 2, 00:19:34.960 "num_base_bdevs_discovered": 1, 00:19:34.960 "num_base_bdevs_operational": 1, 00:19:34.960 "base_bdevs_list": [ 00:19:34.960 { 00:19:34.960 "name": null, 00:19:34.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.960 "is_configured": false, 00:19:34.960 "data_offset": 0, 00:19:34.960 "data_size": 7936 00:19:34.960 }, 00:19:34.960 { 00:19:34.960 "name": "BaseBdev2", 00:19:34.960 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:34.960 "is_configured": true, 00:19:34.960 "data_offset": 256, 00:19:34.960 "data_size": 7936 00:19:34.960 } 00:19:34.960 ] 00:19:34.960 }' 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.960 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.530 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:35.530 19:17:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.530 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.530 [2024-11-27 19:17:44.893786] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:35.530 [2024-11-27 19:17:44.893908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.530 [2024-11-27 19:17:44.893951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:35.530 [2024-11-27 19:17:44.893982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.530 [2024-11-27 19:17:44.894165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.530 [2024-11-27 19:17:44.894218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:35.530 [2024-11-27 19:17:44.894285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:35.530 [2024-11-27 19:17:44.894322] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:35.530 [2024-11-27 19:17:44.894357] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:35.530 [2024-11-27 19:17:44.894401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:35.530 [2024-11-27 19:17:44.909211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:35.530 spare 00:19:35.530 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.530 [2024-11-27 19:17:44.910957] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:35.530 19:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.476 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:36.476 "name": "raid_bdev1", 00:19:36.476 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:36.476 "strip_size_kb": 0, 00:19:36.476 "state": "online", 00:19:36.476 "raid_level": "raid1", 00:19:36.476 "superblock": true, 00:19:36.476 "num_base_bdevs": 2, 00:19:36.476 "num_base_bdevs_discovered": 2, 00:19:36.476 "num_base_bdevs_operational": 2, 00:19:36.476 "process": { 00:19:36.476 "type": "rebuild", 00:19:36.477 "target": "spare", 00:19:36.477 "progress": { 00:19:36.477 "blocks": 2560, 00:19:36.477 "percent": 32 00:19:36.477 } 00:19:36.477 }, 00:19:36.477 "base_bdevs_list": [ 00:19:36.477 { 00:19:36.477 "name": "spare", 00:19:36.477 "uuid": "4263d4b1-21f5-52b7-aa4e-bc9db2e7d59e", 00:19:36.477 "is_configured": true, 00:19:36.477 "data_offset": 256, 00:19:36.477 "data_size": 7936 00:19:36.477 }, 00:19:36.477 { 00:19:36.477 "name": "BaseBdev2", 00:19:36.477 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:36.477 "is_configured": true, 00:19:36.477 "data_offset": 256, 00:19:36.477 "data_size": 7936 00:19:36.477 } 00:19:36.477 ] 00:19:36.477 }' 00:19:36.477 19:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.477 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.477 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.477 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.477 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:36.477 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.477 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.477 [2024-11-27 
19:17:46.074671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.751 [2024-11-27 19:17:46.115543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:36.751 [2024-11-27 19:17:46.115640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.751 [2024-11-27 19:17:46.115673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:36.751 [2024-11-27 19:17:46.115700] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.751 19:17:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.751 "name": "raid_bdev1", 00:19:36.751 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:36.751 "strip_size_kb": 0, 00:19:36.751 "state": "online", 00:19:36.751 "raid_level": "raid1", 00:19:36.751 "superblock": true, 00:19:36.751 "num_base_bdevs": 2, 00:19:36.751 "num_base_bdevs_discovered": 1, 00:19:36.751 "num_base_bdevs_operational": 1, 00:19:36.751 "base_bdevs_list": [ 00:19:36.751 { 00:19:36.751 "name": null, 00:19:36.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.751 "is_configured": false, 00:19:36.751 "data_offset": 0, 00:19:36.751 "data_size": 7936 00:19:36.751 }, 00:19:36.751 { 00:19:36.751 "name": "BaseBdev2", 00:19:36.751 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:36.751 "is_configured": true, 00:19:36.751 "data_offset": 256, 00:19:36.751 "data_size": 7936 00:19:36.751 } 00:19:36.751 ] 00:19:36.751 }' 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.751 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.020 19:17:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.020 "name": "raid_bdev1", 00:19:37.020 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:37.020 "strip_size_kb": 0, 00:19:37.020 "state": "online", 00:19:37.020 "raid_level": "raid1", 00:19:37.020 "superblock": true, 00:19:37.020 "num_base_bdevs": 2, 00:19:37.020 "num_base_bdevs_discovered": 1, 00:19:37.020 "num_base_bdevs_operational": 1, 00:19:37.020 "base_bdevs_list": [ 00:19:37.020 { 00:19:37.020 "name": null, 00:19:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.020 "is_configured": false, 00:19:37.020 "data_offset": 0, 00:19:37.020 "data_size": 7936 00:19:37.020 }, 00:19:37.020 { 00:19:37.020 "name": "BaseBdev2", 00:19:37.020 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:37.020 "is_configured": true, 00:19:37.020 "data_offset": 256, 
00:19:37.020 "data_size": 7936 00:19:37.020 } 00:19:37.020 ] 00:19:37.020 }' 00:19:37.020 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:37.280 [2024-11-27 19:17:46.707797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:37.280 [2024-11-27 19:17:46.707856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.280 [2024-11-27 19:17:46.707875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:37.280 [2024-11-27 19:17:46.707883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.280 [2024-11-27 19:17:46.708053] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.280 [2024-11-27 19:17:46.708068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:37.280 [2024-11-27 19:17:46.708113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:37.280 [2024-11-27 19:17:46.708126] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:37.280 [2024-11-27 19:17:46.708134] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:37.280 [2024-11-27 19:17:46.708144] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:37.280 BaseBdev1 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.280 19:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:38.218 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.218 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.219 19:17:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.219 "name": "raid_bdev1", 00:19:38.219 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:38.219 "strip_size_kb": 0, 00:19:38.219 "state": "online", 00:19:38.219 "raid_level": "raid1", 00:19:38.219 "superblock": true, 00:19:38.219 "num_base_bdevs": 2, 00:19:38.219 "num_base_bdevs_discovered": 1, 00:19:38.219 "num_base_bdevs_operational": 1, 00:19:38.219 "base_bdevs_list": [ 00:19:38.219 { 00:19:38.219 "name": null, 00:19:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.219 "is_configured": false, 00:19:38.219 "data_offset": 0, 00:19:38.219 "data_size": 7936 00:19:38.219 }, 00:19:38.219 { 00:19:38.219 "name": "BaseBdev2", 00:19:38.219 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:38.219 "is_configured": true, 00:19:38.219 "data_offset": 256, 00:19:38.219 "data_size": 7936 00:19:38.219 } 00:19:38.219 ] 00:19:38.219 }' 00:19:38.219 19:17:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.219 19:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.788 "name": "raid_bdev1", 00:19:38.788 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:38.788 "strip_size_kb": 0, 00:19:38.788 "state": "online", 00:19:38.788 "raid_level": "raid1", 00:19:38.788 "superblock": true, 00:19:38.788 "num_base_bdevs": 2, 00:19:38.788 "num_base_bdevs_discovered": 1, 00:19:38.788 "num_base_bdevs_operational": 1, 00:19:38.788 "base_bdevs_list": [ 00:19:38.788 { 00:19:38.788 "name": 
null, 00:19:38.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.788 "is_configured": false, 00:19:38.788 "data_offset": 0, 00:19:38.788 "data_size": 7936 00:19:38.788 }, 00:19:38.788 { 00:19:38.788 "name": "BaseBdev2", 00:19:38.788 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:38.788 "is_configured": true, 00:19:38.788 "data_offset": 256, 00:19:38.788 "data_size": 7936 00:19:38.788 } 00:19:38.788 ] 00:19:38.788 }' 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:38.788 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.789 [2024-11-27 19:17:48.333489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.789 [2024-11-27 19:17:48.333679] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:38.789 [2024-11-27 19:17:48.333714] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:38.789 request: 00:19:38.789 { 00:19:38.789 "base_bdev": "BaseBdev1", 00:19:38.789 "raid_bdev": "raid_bdev1", 00:19:38.789 "method": "bdev_raid_add_base_bdev", 00:19:38.789 "req_id": 1 00:19:38.789 } 00:19:38.789 Got JSON-RPC error response 00:19:38.789 response: 00:19:38.789 { 00:19:38.789 "code": -22, 00:19:38.789 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:38.789 } 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.789 19:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.728 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:39.987 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.988 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.988 "name": "raid_bdev1", 00:19:39.988 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:39.988 "strip_size_kb": 0, 
00:19:39.988 "state": "online", 00:19:39.988 "raid_level": "raid1", 00:19:39.988 "superblock": true, 00:19:39.988 "num_base_bdevs": 2, 00:19:39.988 "num_base_bdevs_discovered": 1, 00:19:39.988 "num_base_bdevs_operational": 1, 00:19:39.988 "base_bdevs_list": [ 00:19:39.988 { 00:19:39.988 "name": null, 00:19:39.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.988 "is_configured": false, 00:19:39.988 "data_offset": 0, 00:19:39.988 "data_size": 7936 00:19:39.988 }, 00:19:39.988 { 00:19:39.988 "name": "BaseBdev2", 00:19:39.988 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:39.988 "is_configured": true, 00:19:39.988 "data_offset": 256, 00:19:39.988 "data_size": 7936 00:19:39.988 } 00:19:39.988 ] 00:19:39.988 }' 00:19:39.988 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.988 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.248 19:17:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.248 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.508 "name": "raid_bdev1", 00:19:40.508 "uuid": "f8fb9acc-cb3f-445f-8a02-d9c9ed1ee324", 00:19:40.508 "strip_size_kb": 0, 00:19:40.508 "state": "online", 00:19:40.508 "raid_level": "raid1", 00:19:40.508 "superblock": true, 00:19:40.508 "num_base_bdevs": 2, 00:19:40.508 "num_base_bdevs_discovered": 1, 00:19:40.508 "num_base_bdevs_operational": 1, 00:19:40.508 "base_bdevs_list": [ 00:19:40.508 { 00:19:40.508 "name": null, 00:19:40.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.508 "is_configured": false, 00:19:40.508 "data_offset": 0, 00:19:40.508 "data_size": 7936 00:19:40.508 }, 00:19:40.508 { 00:19:40.508 "name": "BaseBdev2", 00:19:40.508 "uuid": "4b5ac791-67f8-50d9-a4a4-4e22fe8a272a", 00:19:40.508 "is_configured": true, 00:19:40.508 "data_offset": 256, 00:19:40.508 "data_size": 7936 00:19:40.508 } 00:19:40.508 ] 00:19:40.508 }' 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89096 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89096 ']' 00:19:40.508 19:17:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89096 00:19:40.508 19:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89096 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89096' 00:19:40.509 killing process with pid 89096 00:19:40.509 Received shutdown signal, test time was about 60.000000 seconds 00:19:40.509 00:19:40.509 Latency(us) 00:19:40.509 [2024-11-27T19:17:50.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.509 [2024-11-27T19:17:50.145Z] =================================================================================================================== 00:19:40.509 [2024-11-27T19:17:50.145Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89096 00:19:40.509 [2024-11-27 19:17:50.041118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.509 [2024-11-27 19:17:50.041224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.509 [2024-11-27 19:17:50.041266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.509 [2024-11-27 19:17:50.041276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:40.509 19:17:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89096 00:19:40.769 [2024-11-27 19:17:50.318718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.151 19:17:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:42.151 00:19:42.151 real 0m17.457s 00:19:42.151 user 0m22.881s 00:19:42.151 sys 0m1.683s 00:19:42.151 ************************************ 00:19:42.151 END TEST raid_rebuild_test_sb_md_interleaved 00:19:42.151 ************************************ 00:19:42.151 19:17:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.151 19:17:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:42.151 19:17:51 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:42.151 19:17:51 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:42.151 19:17:51 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89096 ']' 00:19:42.151 19:17:51 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89096 00:19:42.151 19:17:51 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:42.151 00:19:42.151 real 12m3.376s 00:19:42.151 user 16m4.358s 00:19:42.151 sys 2m2.803s 00:19:42.151 19:17:51 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.151 ************************************ 00:19:42.151 END TEST bdev_raid 00:19:42.151 ************************************ 00:19:42.151 19:17:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.151 19:17:51 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:42.151 19:17:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.151 19:17:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.151 19:17:51 -- common/autotest_common.sh@10 -- # set +x 00:19:42.151 
************************************ 00:19:42.151 START TEST spdkcli_raid 00:19:42.151 ************************************ 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:42.151 * Looking for test storage... 00:19:42.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.151 19:17:51 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.151 --rc genhtml_branch_coverage=1 00:19:42.151 --rc genhtml_function_coverage=1 00:19:42.151 --rc genhtml_legend=1 00:19:42.151 --rc geninfo_all_blocks=1 00:19:42.151 --rc geninfo_unexecuted_blocks=1 00:19:42.151 00:19:42.151 ' 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.151 --rc genhtml_branch_coverage=1 00:19:42.151 --rc genhtml_function_coverage=1 00:19:42.151 --rc genhtml_legend=1 00:19:42.151 --rc geninfo_all_blocks=1 00:19:42.151 --rc geninfo_unexecuted_blocks=1 00:19:42.151 00:19:42.151 ' 00:19:42.151 
19:17:51 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.151 --rc genhtml_branch_coverage=1 00:19:42.151 --rc genhtml_function_coverage=1 00:19:42.151 --rc genhtml_legend=1 00:19:42.151 --rc geninfo_all_blocks=1 00:19:42.151 --rc geninfo_unexecuted_blocks=1 00:19:42.151 00:19:42.151 ' 00:19:42.151 19:17:51 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.151 --rc genhtml_branch_coverage=1 00:19:42.151 --rc genhtml_function_coverage=1 00:19:42.151 --rc genhtml_legend=1 00:19:42.151 --rc geninfo_all_blocks=1 00:19:42.151 --rc geninfo_unexecuted_blocks=1 00:19:42.151 00:19:42.151 ' 00:19:42.151 19:17:51 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:42.151 19:17:51 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:42.152 19:17:51 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:42.152 19:17:51 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89769 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:42.412 19:17:51 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89769 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89769 ']' 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.412 19:17:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.412 [2024-11-27 19:17:51.905557] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:42.412 [2024-11-27 19:17:51.905783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89769 ] 00:19:42.672 [2024-11-27 19:17:52.085776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:42.672 [2024-11-27 19:17:52.196222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.672 [2024-11-27 19:17:52.196254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.613 19:17:53 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.613 19:17:53 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:43.613 19:17:53 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:43.613 19:17:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.613 19:17:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.613 19:17:53 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:43.613 19:17:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:43.613 19:17:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.613 19:17:53 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:43.613 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:43.613 ' 00:19:44.996 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:44.996 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:45.262 19:17:54 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:45.262 19:17:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.262 19:17:54 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.262 19:17:54 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:45.262 19:17:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.262 19:17:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.262 19:17:54 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:45.262 ' 00:19:46.202 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:46.462 19:17:55 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:46.462 19:17:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.462 19:17:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.462 19:17:55 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:46.462 19:17:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.462 19:17:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.462 19:17:55 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:46.462 19:17:55 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:47.033 19:17:56 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:47.033 19:17:56 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:47.033 19:17:56 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:47.033 19:17:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.033 19:17:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.033 19:17:56 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:47.033 19:17:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.033 19:17:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.033 19:17:56 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:47.033 ' 00:19:47.973 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:48.234 19:17:57 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:48.234 19:17:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.234 19:17:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.234 19:17:57 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:48.234 19:17:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.234 19:17:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.234 19:17:57 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:48.234 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:48.234 ' 00:19:49.615 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:49.615 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:49.615 19:17:59 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:49.615 19:17:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.615 19:17:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.874 19:17:59 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89769 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89769 ']' 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89769 00:19:49.874 19:17:59 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89769 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89769' 00:19:49.874 killing process with pid 89769 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89769 00:19:49.874 19:17:59 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89769 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89769 ']' 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89769 00:19:52.414 19:18:01 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89769 ']' 00:19:52.414 19:18:01 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89769 00:19:52.414 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89769) - No such process 00:19:52.414 19:18:01 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89769 is not found' 00:19:52.414 Process with pid 89769 is not found 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:52.414 19:18:01 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:52.414 00:19:52.414 real 0m10.023s 00:19:52.414 user 0m20.531s 00:19:52.414 sys 
0m1.236s 00:19:52.414 19:18:01 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.414 19:18:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.414 ************************************ 00:19:52.414 END TEST spdkcli_raid 00:19:52.414 ************************************ 00:19:52.414 19:18:01 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:52.414 19:18:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.414 19:18:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.414 19:18:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.414 ************************************ 00:19:52.414 START TEST blockdev_raid5f 00:19:52.414 ************************************ 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:52.414 * Looking for test storage... 00:19:52.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.414 19:18:01 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:52.414 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.414 --rc genhtml_branch_coverage=1 00:19:52.414 --rc genhtml_function_coverage=1 00:19:52.414 --rc genhtml_legend=1 00:19:52.414 --rc geninfo_all_blocks=1 00:19:52.414 --rc geninfo_unexecuted_blocks=1 00:19:52.414 00:19:52.414 ' 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.414 --rc genhtml_branch_coverage=1 00:19:52.414 --rc genhtml_function_coverage=1 00:19:52.414 --rc genhtml_legend=1 00:19:52.414 --rc geninfo_all_blocks=1 00:19:52.414 --rc geninfo_unexecuted_blocks=1 00:19:52.414 00:19:52.414 ' 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.414 --rc genhtml_branch_coverage=1 00:19:52.414 --rc genhtml_function_coverage=1 00:19:52.414 --rc genhtml_legend=1 00:19:52.414 --rc geninfo_all_blocks=1 00:19:52.414 --rc geninfo_unexecuted_blocks=1 00:19:52.414 00:19:52.414 ' 00:19:52.414 19:18:01 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.414 --rc genhtml_branch_coverage=1 00:19:52.414 --rc genhtml_function_coverage=1 00:19:52.414 --rc genhtml_legend=1 00:19:52.415 --rc geninfo_all_blocks=1 00:19:52.415 --rc geninfo_unexecuted_blocks=1 00:19:52.415 00:19:52.415 ' 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90049 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:52.415 19:18:01 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90049 00:19:52.415 19:18:01 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90049 ']' 00:19:52.415 19:18:01 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.415 19:18:01 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.415 19:18:01 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.415 19:18:01 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.415 19:18:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.415 [2024-11-27 19:18:01.977299] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:52.415 [2024-11-27 19:18:01.977514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90049 ] 00:19:52.675 [2024-11-27 19:18:02.157210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.675 [2024-11-27 19:18:02.265070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:53.614 19:18:03 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.614 Malloc0 00:19:53.614 Malloc1 00:19:53.614 Malloc2 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.614 19:18:03 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.614 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.615 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.615 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:53.615 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:53.615 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:53.615 19:18:03 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9b354087-7322-465c-bc51-04e7c7864af1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9b354087-7322-465c-bc51-04e7c7864af1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9b354087-7322-465c-bc51-04e7c7864af1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "db05ce65-b0c9-422e-b140-5d31c0299936",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"ff1b51de-85af-4fab-8888-f77bf383a017",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7b84b0cd-500d-41b8-a140-17a8453c3822",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:53.875 19:18:03 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90049 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90049 ']' 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90049 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90049 00:19:53.875 killing process with pid 90049 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90049' 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90049 00:19:53.875 19:18:03 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90049 00:19:56.419 19:18:05 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:56.419 19:18:05 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:56.419 19:18:05 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:56.419 19:18:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:56.419 19:18:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:56.419 ************************************ 00:19:56.419 START TEST bdev_hello_world 00:19:56.419 ************************************ 00:19:56.419 19:18:05 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:56.419 [2024-11-27 19:18:05.921462] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:56.419 [2024-11-27 19:18:05.921566] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90111 ] 00:19:56.679 [2024-11-27 19:18:06.092908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.679 [2024-11-27 19:18:06.199405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.248 [2024-11-27 19:18:06.715422] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:57.248 [2024-11-27 19:18:06.715555] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:57.248 [2024-11-27 19:18:06.715583] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:57.248 [2024-11-27 19:18:06.716039] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:57.248 [2024-11-27 19:18:06.716177] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:57.248 [2024-11-27 19:18:06.716194] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:57.249 [2024-11-27 19:18:06.716241] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:57.249 00:19:57.249 [2024-11-27 19:18:06.716257] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:58.631 00:19:58.631 real 0m2.172s 00:19:58.631 user 0m1.796s 00:19:58.631 sys 0m0.251s 00:19:58.631 ************************************ 00:19:58.631 END TEST bdev_hello_world 00:19:58.631 ************************************ 00:19:58.631 19:18:08 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.631 19:18:08 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:58.631 19:18:08 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:58.631 19:18:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:58.631 19:18:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.631 19:18:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:58.631 ************************************ 00:19:58.631 START TEST bdev_bounds 00:19:58.631 ************************************ 00:19:58.631 Process bdevio pid: 90153 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90153 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90153' 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90153 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90153 ']' 00:19:58.631 19:18:08 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.631 19:18:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:58.631 [2024-11-27 19:18:08.172535] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:58.631 [2024-11-27 19:18:08.172739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90153 ] 00:19:58.891 [2024-11-27 19:18:08.344459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.891 [2024-11-27 19:18:08.457419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.891 [2024-11-27 19:18:08.457649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.891 [2024-11-27 19:18:08.457658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.461 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.461 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:59.461 19:18:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:59.722 I/O targets: 00:19:59.722 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:59.722 00:19:59.722 
00:19:59.722 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.722 http://cunit.sourceforge.net/ 00:19:59.722 00:19:59.722 00:19:59.722 Suite: bdevio tests on: raid5f 00:19:59.722 Test: blockdev write read block ...passed 00:19:59.722 Test: blockdev write zeroes read block ...passed 00:19:59.722 Test: blockdev write zeroes read no split ...passed 00:19:59.722 Test: blockdev write zeroes read split ...passed 00:19:59.983 Test: blockdev write zeroes read split partial ...passed 00:19:59.983 Test: blockdev reset ...passed 00:19:59.983 Test: blockdev write read 8 blocks ...passed 00:19:59.983 Test: blockdev write read size > 128k ...passed 00:19:59.983 Test: blockdev write read invalid size ...passed 00:19:59.983 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:59.983 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:59.983 Test: blockdev write read max offset ...passed 00:19:59.983 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:59.983 Test: blockdev writev readv 8 blocks ...passed 00:19:59.983 Test: blockdev writev readv 30 x 1block ...passed 00:19:59.983 Test: blockdev writev readv block ...passed 00:19:59.983 Test: blockdev writev readv size > 128k ...passed 00:19:59.983 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:59.983 Test: blockdev comparev and writev ...passed 00:19:59.983 Test: blockdev nvme passthru rw ...passed 00:19:59.983 Test: blockdev nvme passthru vendor specific ...passed 00:19:59.983 Test: blockdev nvme admin passthru ...passed 00:19:59.983 Test: blockdev copy ...passed 00:19:59.983 00:19:59.983 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.983 suites 1 1 n/a 0 0 00:19:59.983 tests 23 23 23 0 0 00:19:59.983 asserts 130 130 130 0 n/a 00:19:59.983 00:19:59.983 Elapsed time = 0.618 seconds 00:19:59.983 0 00:19:59.983 19:18:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90153 00:19:59.983 
19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90153 ']' 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90153 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90153 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90153' 00:19:59.984 killing process with pid 90153 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90153 00:19:59.984 19:18:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90153 00:20:01.367 19:18:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:01.367 00:20:01.367 real 0m2.689s 00:20:01.367 user 0m6.664s 00:20:01.367 sys 0m0.408s 00:20:01.367 19:18:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.367 19:18:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:01.367 ************************************ 00:20:01.367 END TEST bdev_bounds 00:20:01.367 ************************************ 00:20:01.367 19:18:10 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:01.367 19:18:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:01.367 19:18:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.367 
19:18:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:01.367 ************************************ 00:20:01.367 START TEST bdev_nbd 00:20:01.367 ************************************ 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90218 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90218 /var/tmp/spdk-nbd.sock 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90218 ']' 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:01.367 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.368 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:01.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:01.368 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.368 19:18:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:01.368 [2024-11-27 19:18:10.961344] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:01.368 [2024-11-27 19:18:10.961586] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.627 [2024-11-27 19:18:11.142503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.627 [2024-11-27 19:18:11.251372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:02.197 19:18:11 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.458 1+0 records in 00:20:02.458 1+0 records out 00:20:02.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430391 s, 9.5 MB/s 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:02.458 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:02.748 { 00:20:02.748 "nbd_device": "/dev/nbd0", 00:20:02.748 "bdev_name": "raid5f" 00:20:02.748 } 00:20:02.748 ]' 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:02.748 { 00:20:02.748 "nbd_device": "/dev/nbd0", 00:20:02.748 "bdev_name": "raid5f" 00:20:02.748 } 00:20:02.748 ]' 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:02.748 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.008 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.268 19:18:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:03.529 /dev/nbd0 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.529 19:18:13 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.529 1+0 records in 00:20:03.529 1+0 records out 00:20:03.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669867 s, 6.1 MB/s 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.529 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:03.789 { 00:20:03.789 "nbd_device": "/dev/nbd0", 00:20:03.789 "bdev_name": "raid5f" 00:20:03.789 } 00:20:03.789 ]' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:03.789 { 00:20:03.789 "nbd_device": "/dev/nbd0", 00:20:03.789 "bdev_name": "raid5f" 00:20:03.789 } 00:20:03.789 ]' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:03.789 256+0 records in 00:20:03.789 256+0 records out 00:20:03.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139509 s, 75.2 MB/s 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:03.789 256+0 records in 00:20:03.789 256+0 records out 00:20:03.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336194 s, 31.2 MB/s 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.789 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.049 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:04.310 19:18:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:04.574 malloc_lvol_verify 00:20:04.574 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:04.846 f397b5a9-a7b5-4f6c-8fdc-5f9e5d16eb81 00:20:04.846 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:04.846 fdaeb433-b6ee-47c4-a54a-35ac12862f77 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:05.146 /dev/nbd0 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:05.146 mke2fs 1.47.0 (5-Feb-2023) 00:20:05.146 Discarding device blocks: 0/4096 done 00:20:05.146 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:05.146 00:20:05.146 Allocating group tables: 0/1 done 00:20:05.146 Writing inode tables: 0/1 done 00:20:05.146 Creating journal (1024 blocks): done 00:20:05.146 Writing superblocks and filesystem accounting information: 0/1 done 00:20:05.146 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.146 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90218 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90218 ']' 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90218 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90218 00:20:05.427 killing process with pid 90218 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90218' 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90218 00:20:05.427 19:18:14 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90218 00:20:06.811 ************************************ 00:20:06.811 END TEST bdev_nbd 00:20:06.811 ************************************ 00:20:06.811 19:18:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:06.811 00:20:06.811 real 0m5.536s 00:20:06.811 user 0m7.407s 00:20:06.811 sys 0m1.384s 00:20:06.811 19:18:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.811 19:18:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:07.072 19:18:16 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:07.072 19:18:16 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:20:07.072 19:18:16 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:20:07.072 19:18:16 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:07.072 19:18:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.072 19:18:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.072 19:18:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:07.072 ************************************ 00:20:07.072 START TEST bdev_fio 00:20:07.072 ************************************ 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:07.072 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:07.072 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:07.073 ************************************ 00:20:07.073 START TEST bdev_fio_rw_verify 00:20:07.073 ************************************ 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.073 19:18:16 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.334 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:07.334 fio-3.35 00:20:07.334 Starting 1 thread 00:20:19.554 00:20:19.554 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90420: Wed Nov 27 19:18:27 2024 00:20:19.554 read: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(483MiB/10001msec) 00:20:19.554 slat (nsec): min=17702, max=57233, avg=19618.91, stdev=1908.79 00:20:19.554 clat (usec): min=10, max=298, avg=129.91, stdev=46.32 00:20:19.554 lat (usec): min=30, max=320, avg=149.53, stdev=46.52 00:20:19.554 clat percentiles (usec): 00:20:19.554 | 50.000th=[ 135], 99.000th=[ 217], 99.900th=[ 241], 99.990th=[ 269], 00:20:19.554 | 99.999th=[ 293] 00:20:19.554 write: IOPS=13.0k, BW=50.8MiB/s (53.2MB/s)(501MiB/9874msec); 0 zone resets 00:20:19.554 slat (usec): min=7, max=235, avg=15.99, stdev= 3.48 00:20:19.554 clat (usec): min=59, max=1344, avg=297.22, stdev=39.80 00:20:19.554 lat (usec): min=74, max=1580, avg=313.21, stdev=40.77 00:20:19.554 clat percentiles (usec): 00:20:19.554 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 553], 99.990th=[ 1139], 00:20:19.554 | 99.999th=[ 1287] 00:20:19.554 bw ( KiB/s): min=48688, max=54792, per=98.79%, avg=51341.47, stdev=1466.01, samples=19 00:20:19.554 iops : min=12172, max=13698, avg=12835.37, stdev=366.50, samples=19 00:20:19.554 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.35%, 250=38.91%, 500=44.67% 00:20:19.554 lat (usec) : 750=0.04%, 1000=0.02% 00:20:19.554 lat (msec) : 2=0.01% 00:20:19.554 cpu : usr=98.85%, sys=0.48%, ctx=27, majf=0, minf=10144 00:20:19.554 IO depths : 1=7.6%, 2=19.8%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.554 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.554 issued rwts: total=123585,128290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:19.554 00:20:19.554 Run status group 0 (all jobs): 00:20:19.554 READ: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=483MiB (506MB), run=10001-10001msec 00:20:19.554 WRITE: bw=50.8MiB/s (53.2MB/s), 50.8MiB/s-50.8MiB/s (53.2MB/s-53.2MB/s), io=501MiB (525MB), run=9874-9874msec 00:20:19.813 ----------------------------------------------------- 00:20:19.813 Suppressions used: 00:20:19.813 count bytes template 00:20:19.813 1 7 /usr/src/fio/parse.c 00:20:19.813 860 82560 /usr/src/fio/iolog.c 00:20:19.813 1 8 libtcmalloc_minimal.so 00:20:19.813 1 904 libcrypto.so 00:20:19.813 ----------------------------------------------------- 00:20:19.813 00:20:19.813 00:20:19.813 real 0m12.725s 00:20:19.813 user 0m13.076s 00:20:19.813 sys 0m0.657s 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:19.813 ************************************ 00:20:19.813 END TEST bdev_fio_rw_verify 00:20:19.813 ************************************ 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:19.813 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "9b354087-7322-465c-bc51-04e7c7864af1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"9b354087-7322-465c-bc51-04e7c7864af1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "9b354087-7322-465c-bc51-04e7c7864af1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "db05ce65-b0c9-422e-b140-5d31c0299936",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ff1b51de-85af-4fab-8888-f77bf383a017",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7b84b0cd-500d-41b8-a140-17a8453c3822",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:20.073 /home/vagrant/spdk_repo/spdk 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:20:20.073 00:20:20.073 real 0m13.036s 00:20:20.073 user 0m13.201s 00:20:20.073 sys 0m0.806s 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.073 19:18:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 ************************************ 00:20:20.073 END TEST bdev_fio 00:20:20.073 ************************************ 00:20:20.073 19:18:29 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:20.073 19:18:29 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:20.073 19:18:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:20.073 19:18:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.073 19:18:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:20.073 ************************************ 00:20:20.073 START TEST bdev_verify 00:20:20.073 ************************************ 00:20:20.073 19:18:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:20.073 [2024-11-27 19:18:29.672988] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:20.073 [2024-11-27 19:18:29.673098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90582 ] 00:20:20.334 [2024-11-27 19:18:29.849389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:20.334 [2024-11-27 19:18:29.961968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.334 [2024-11-27 19:18:29.961994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.903 Running I/O for 5 seconds... 00:20:22.855 10392.00 IOPS, 40.59 MiB/s [2024-11-27T19:18:33.875Z] 10507.00 IOPS, 41.04 MiB/s [2024-11-27T19:18:34.815Z] 10542.67 IOPS, 41.18 MiB/s [2024-11-27T19:18:35.756Z] 10562.50 IOPS, 41.26 MiB/s [2024-11-27T19:18:35.756Z] 10577.40 IOPS, 41.32 MiB/s 00:20:26.120 Latency(us) 00:20:26.120 [2024-11-27T19:18:35.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.120 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:26.120 Verification LBA range: start 0x0 length 0x2000 00:20:26.120 raid5f : 5.02 4274.03 16.70 0.00 0.00 45130.04 128.78 32739.38 00:20:26.120 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:26.120 Verification LBA range: start 0x2000 length 0x2000 00:20:26.120 raid5f : 5.01 6293.51 24.58 0.00 0.00 30669.00 334.48 22322.31 00:20:26.120 [2024-11-27T19:18:35.756Z] =================================================================================================================== 00:20:26.120 [2024-11-27T19:18:35.756Z] Total : 10567.54 41.28 0.00 0.00 36526.79 128.78 32739.38 00:20:27.502 00:20:27.502 real 0m7.229s 00:20:27.502 user 0m13.364s 00:20:27.502 sys 0m0.286s 00:20:27.502 ************************************ 00:20:27.502 END TEST bdev_verify 00:20:27.502 ************************************ 
00:20:27.502 19:18:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.502 19:18:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:27.502 19:18:36 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:27.502 19:18:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:27.502 19:18:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.502 19:18:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:27.502 ************************************ 00:20:27.502 START TEST bdev_verify_big_io 00:20:27.502 ************************************ 00:20:27.502 19:18:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:27.502 [2024-11-27 19:18:36.976543] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:27.502 [2024-11-27 19:18:36.976724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90679 ] 00:20:27.762 [2024-11-27 19:18:37.155544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:27.762 [2024-11-27 19:18:37.263573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.762 [2024-11-27 19:18:37.263602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.331 Running I/O for 5 seconds... 
00:20:30.280 633.00 IOPS, 39.56 MiB/s [2024-11-27T19:18:40.857Z] 760.00 IOPS, 47.50 MiB/s [2024-11-27T19:18:42.238Z] 761.33 IOPS, 47.58 MiB/s [2024-11-27T19:18:43.178Z] 776.50 IOPS, 48.53 MiB/s [2024-11-27T19:18:43.178Z] 761.60 IOPS, 47.60 MiB/s 00:20:33.542 Latency(us) 00:20:33.542 [2024-11-27T19:18:43.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.542 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:33.542 Verification LBA range: start 0x0 length 0x200 00:20:33.542 raid5f : 5.17 343.61 21.48 0.00 0.00 9262166.88 169.03 397451.51 00:20:33.542 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:33.542 Verification LBA range: start 0x200 length 0x200 00:20:33.542 raid5f : 5.22 437.86 27.37 0.00 0.00 7339482.17 170.82 318693.84 00:20:33.542 [2024-11-27T19:18:43.178Z] =================================================================================================================== 00:20:33.542 [2024-11-27T19:18:43.178Z] Total : 781.47 48.84 0.00 0.00 8180656.73 169.03 397451.51 00:20:34.924 00:20:34.924 real 0m7.462s 00:20:34.924 user 0m13.842s 00:20:34.924 sys 0m0.271s 00:20:34.924 ************************************ 00:20:34.924 END TEST bdev_verify_big_io 00:20:34.924 ************************************ 00:20:34.924 19:18:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.924 19:18:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.924 19:18:44 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:34.924 19:18:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:34.924 19:18:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.924 19:18:44 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:34.924 ************************************ 00:20:34.924 START TEST bdev_write_zeroes 00:20:34.924 ************************************ 00:20:34.924 19:18:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:34.924 [2024-11-27 19:18:44.517377] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:34.924 [2024-11-27 19:18:44.517484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90779 ] 00:20:35.183 [2024-11-27 19:18:44.690505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.183 [2024-11-27 19:18:44.796373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.754 Running I/O for 1 seconds... 
00:20:37.137 30063.00 IOPS, 117.43 MiB/s 00:20:37.137 Latency(us) 00:20:37.137 [2024-11-27T19:18:46.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.137 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:37.137 raid5f : 1.01 30038.49 117.34 0.00 0.00 4248.94 1252.05 5780.90 00:20:37.137 [2024-11-27T19:18:46.773Z] =================================================================================================================== 00:20:37.137 [2024-11-27T19:18:46.773Z] Total : 30038.49 117.34 0.00 0.00 4248.94 1252.05 5780.90 00:20:38.077 00:20:38.077 real 0m3.216s 00:20:38.077 user 0m2.826s 00:20:38.077 sys 0m0.259s 00:20:38.077 19:18:47 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.077 19:18:47 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:38.077 ************************************ 00:20:38.077 END TEST bdev_write_zeroes 00:20:38.077 ************************************ 00:20:38.077 19:18:47 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:38.077 19:18:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:38.077 19:18:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.077 19:18:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.338 ************************************ 00:20:38.338 START TEST bdev_json_nonenclosed 00:20:38.338 ************************************ 00:20:38.338 19:18:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:38.338 [2024-11-27 
19:18:47.811656] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:38.338 [2024-11-27 19:18:47.811882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90838 ] 00:20:38.599 [2024-11-27 19:18:47.989339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.599 [2024-11-27 19:18:48.093717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.599 [2024-11-27 19:18:48.093884] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:38.599 [2024-11-27 19:18:48.093947] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:38.599 [2024-11-27 19:18:48.093968] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:38.860 00:20:38.860 real 0m0.611s 00:20:38.860 user 0m0.367s 00:20:38.860 sys 0m0.139s 00:20:38.860 19:18:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.860 19:18:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:38.860 ************************************ 00:20:38.861 END TEST bdev_json_nonenclosed 00:20:38.861 ************************************ 00:20:38.861 19:18:48 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:38.861 19:18:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:38.861 19:18:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.861 19:18:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:38.861 
************************************ 00:20:38.861 START TEST bdev_json_nonarray 00:20:38.861 ************************************ 00:20:38.861 19:18:48 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:38.861 [2024-11-27 19:18:48.493638] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:38.861 [2024-11-27 19:18:48.493851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90862 ] 00:20:39.121 [2024-11-27 19:18:48.666069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.380 [2024-11-27 19:18:48.772171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.380 [2024-11-27 19:18:48.772286] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:39.380 [2024-11-27 19:18:48.772303] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:39.380 [2024-11-27 19:18:48.772320] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:39.380 00:20:39.380 real 0m0.603s 00:20:39.380 user 0m0.368s 00:20:39.380 sys 0m0.131s 00:20:39.380 19:18:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.380 ************************************ 00:20:39.380 END TEST bdev_json_nonarray 00:20:39.380 ************************************ 00:20:39.380 19:18:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:39.640 19:18:49 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:39.640 00:20:39.640 real 0m47.468s 00:20:39.640 user 1m4.046s 00:20:39.640 sys 0m5.107s 00:20:39.640 19:18:49 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.640 ************************************ 00:20:39.640 END TEST blockdev_raid5f 00:20:39.640 
************************************ 00:20:39.640 19:18:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.640 19:18:49 -- spdk/autotest.sh@194 -- # uname -s 00:20:39.640 19:18:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:20:39.640 19:18:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.640 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:20:39.640 19:18:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:39.640 19:18:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:20:39.640 19:18:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:20:39.640 19:18:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:20:39.640 19:18:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.640 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:20:39.640 19:18:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:20:39.640 19:18:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:20:39.640 19:18:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:20:39.640 19:18:49 -- common/autotest_common.sh@10 -- # set +x 00:20:42.215 INFO: APP EXITING 00:20:42.215 INFO: killing all VMs 00:20:42.215 INFO: killing vhost app 00:20:42.215 INFO: EXIT DONE 00:20:42.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.488 Waiting for block devices as requested 00:20:42.488 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.749 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:43.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:43.691 Cleaning 00:20:43.691 Removing: /var/run/dpdk/spdk0/config 00:20:43.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:43.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:43.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:43.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:43.691 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:43.691 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:43.691 Removing: /dev/shm/spdk_tgt_trace.pid56944 00:20:43.691 Removing: /var/run/dpdk/spdk0 00:20:43.691 Removing: /var/run/dpdk/spdk_pid56698 00:20:43.691 Removing: /var/run/dpdk/spdk_pid56944 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57184 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57288 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57344 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57483 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57501 00:20:43.691 
Removing: /var/run/dpdk/spdk_pid57711 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57827 00:20:43.691 Removing: /var/run/dpdk/spdk_pid57935 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58063 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58171 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58216 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58252 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58323 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58451 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58898 00:20:43.691 Removing: /var/run/dpdk/spdk_pid58975 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59055 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59071 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59229 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59251 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59405 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59426 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59496 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59518 00:20:43.691 Removing: /var/run/dpdk/spdk_pid59589 00:20:43.952 Removing: /var/run/dpdk/spdk_pid59607 00:20:43.952 Removing: /var/run/dpdk/spdk_pid59809 00:20:43.952 Removing: /var/run/dpdk/spdk_pid59845 00:20:43.952 Removing: /var/run/dpdk/spdk_pid59936 00:20:43.952 Removing: /var/run/dpdk/spdk_pid61295 00:20:43.952 Removing: /var/run/dpdk/spdk_pid61512 00:20:43.952 Removing: /var/run/dpdk/spdk_pid61652 00:20:43.952 Removing: /var/run/dpdk/spdk_pid62301 00:20:43.952 Removing: /var/run/dpdk/spdk_pid62507 00:20:43.952 Removing: /var/run/dpdk/spdk_pid62652 00:20:43.952 Removing: /var/run/dpdk/spdk_pid63296 00:20:43.952 Removing: /var/run/dpdk/spdk_pid63626 00:20:43.952 Removing: /var/run/dpdk/spdk_pid63773 00:20:43.952 Removing: /var/run/dpdk/spdk_pid65162 00:20:43.952 Removing: /var/run/dpdk/spdk_pid65415 00:20:43.952 Removing: /var/run/dpdk/spdk_pid65561 00:20:43.952 Removing: /var/run/dpdk/spdk_pid66946 00:20:43.952 Removing: /var/run/dpdk/spdk_pid67205 00:20:43.952 Removing: /var/run/dpdk/spdk_pid67350 00:20:43.952 Removing: 
/var/run/dpdk/spdk_pid68741 00:20:43.952 Removing: /var/run/dpdk/spdk_pid69187 00:20:43.952 Removing: /var/run/dpdk/spdk_pid69327 00:20:43.952 Removing: /var/run/dpdk/spdk_pid70823 00:20:43.952 Removing: /var/run/dpdk/spdk_pid71091 00:20:43.952 Removing: /var/run/dpdk/spdk_pid71237 00:20:43.952 Removing: /var/run/dpdk/spdk_pid72734 00:20:43.952 Removing: /var/run/dpdk/spdk_pid72995 00:20:43.952 Removing: /var/run/dpdk/spdk_pid73141 00:20:43.952 Removing: /var/run/dpdk/spdk_pid74632 00:20:43.952 Removing: /var/run/dpdk/spdk_pid75119 00:20:43.952 Removing: /var/run/dpdk/spdk_pid75270 00:20:43.952 Removing: /var/run/dpdk/spdk_pid75414 00:20:43.952 Removing: /var/run/dpdk/spdk_pid75843 00:20:43.952 Removing: /var/run/dpdk/spdk_pid76576 00:20:43.952 Removing: /var/run/dpdk/spdk_pid76952 00:20:43.952 Removing: /var/run/dpdk/spdk_pid77635 00:20:43.952 Removing: /var/run/dpdk/spdk_pid78076 00:20:43.952 Removing: /var/run/dpdk/spdk_pid78829 00:20:43.952 Removing: /var/run/dpdk/spdk_pid79238 00:20:43.952 Removing: /var/run/dpdk/spdk_pid81207 00:20:43.952 Removing: /var/run/dpdk/spdk_pid81651 00:20:43.952 Removing: /var/run/dpdk/spdk_pid82086 00:20:43.952 Removing: /var/run/dpdk/spdk_pid84180 00:20:43.952 Removing: /var/run/dpdk/spdk_pid84670 00:20:43.952 Removing: /var/run/dpdk/spdk_pid85186 00:20:43.952 Removing: /var/run/dpdk/spdk_pid86244 00:20:43.952 Removing: /var/run/dpdk/spdk_pid86571 00:20:43.952 Removing: /var/run/dpdk/spdk_pid87514 00:20:43.952 Removing: /var/run/dpdk/spdk_pid87832 00:20:43.952 Removing: /var/run/dpdk/spdk_pid88769 00:20:43.952 Removing: /var/run/dpdk/spdk_pid89096 00:20:43.952 Removing: /var/run/dpdk/spdk_pid89769 00:20:43.952 Removing: /var/run/dpdk/spdk_pid90049 00:20:43.952 Removing: /var/run/dpdk/spdk_pid90111 00:20:43.952 Removing: /var/run/dpdk/spdk_pid90153 00:20:44.212 Removing: /var/run/dpdk/spdk_pid90405 00:20:44.212 Removing: /var/run/dpdk/spdk_pid90582 00:20:44.212 Removing: /var/run/dpdk/spdk_pid90679 00:20:44.212 Removing: 
/var/run/dpdk/spdk_pid90779 00:20:44.212 Removing: /var/run/dpdk/spdk_pid90838 00:20:44.212 Removing: /var/run/dpdk/spdk_pid90862 00:20:44.212 Clean 00:20:44.212 19:18:53 -- common/autotest_common.sh@1453 -- # return 0 00:20:44.212 19:18:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:20:44.212 19:18:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.212 19:18:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.212 19:18:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:20:44.213 19:18:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.213 19:18:53 -- common/autotest_common.sh@10 -- # set +x 00:20:44.213 19:18:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:44.213 19:18:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:44.213 19:18:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:44.213 19:18:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:20:44.213 19:18:53 -- spdk/autotest.sh@398 -- # hostname 00:20:44.213 19:18:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:44.473 geninfo: WARNING: invalid characters removed from testname! 
00:21:11.034 19:19:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:12.417 19:19:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:14.328 19:19:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:16.238 19:19:25 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:18.777 19:19:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:20.688 19:19:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:22.600 19:19:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:22.600 19:19:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:22.600 19:19:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:22.600 19:19:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:22.600 19:19:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:22.600 19:19:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:22.600 + [[ -n 5423 ]] 00:21:22.600 + sudo kill 5423 00:21:22.610 [Pipeline] } 00:21:22.627 [Pipeline] // timeout 00:21:22.633 [Pipeline] } 00:21:22.647 [Pipeline] // stage 00:21:22.653 [Pipeline] } 00:21:22.667 [Pipeline] // catchError 00:21:22.677 [Pipeline] stage 00:21:22.679 [Pipeline] { (Stop VM) 00:21:22.692 [Pipeline] sh 00:21:22.976 + vagrant halt 00:21:25.518 ==> default: Halting domain... 00:21:33.665 [Pipeline] sh 00:21:33.950 + vagrant destroy -f 00:21:36.498 ==> default: Removing domain... 
00:21:36.539 [Pipeline] sh 00:21:36.852 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:36.862 [Pipeline] } 00:21:36.878 [Pipeline] // stage 00:21:36.883 [Pipeline] } 00:21:36.897 [Pipeline] // dir 00:21:36.903 [Pipeline] } 00:21:36.918 [Pipeline] // wrap 00:21:36.925 [Pipeline] } 00:21:36.938 [Pipeline] // catchError 00:21:36.947 [Pipeline] stage 00:21:36.949 [Pipeline] { (Epilogue) 00:21:36.963 [Pipeline] sh 00:21:37.249 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:41.463 [Pipeline] catchError 00:21:41.465 [Pipeline] { 00:21:41.478 [Pipeline] sh 00:21:41.764 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:41.764 Artifacts sizes are good 00:21:41.774 [Pipeline] } 00:21:41.787 [Pipeline] // catchError 00:21:41.799 [Pipeline] archiveArtifacts 00:21:41.807 Archiving artifacts 00:21:41.939 [Pipeline] cleanWs 00:21:41.951 [WS-CLEANUP] Deleting project workspace... 00:21:41.951 [WS-CLEANUP] Deferred wipeout is used... 00:21:41.957 [WS-CLEANUP] done 00:21:41.959 [Pipeline] } 00:21:41.975 [Pipeline] // stage 00:21:41.981 [Pipeline] } 00:21:41.995 [Pipeline] // node 00:21:42.000 [Pipeline] End of Pipeline 00:21:42.039 Finished: SUCCESS